Search Results

Search found 56902 results on 2277 pages for 'stefan thieme@oracle com'.

Page 728/2277 | < Previous Page | 724 725 726 727 728 729 730 731 732 733 734 735  | Next Page >

  • mod_rewrite not working?

    - by Sean Kimball
    I have a bunch of non-existent URLs that need to be redirected to new ones, but the redirects are not working. mod_rewrite does work and is enabled; I'm wondering if the redirect target has to actually exist in order for a redirect to work. Here is what I have:

      Redirect 301 /cgi-bin/commerce.cgi?display=action&emptyoverride=yes&template=Assets/XHTML/Advantage.html http://domain.com/the-bag-to-nature-advantage.html

    UPDATE: this is the request that comes in [indexed in Google!]:

      http://domain.com//cgi-bin/commerce.cgi?display=action&emptyoverride=yes&template=Assets/XHTML/Advantage.html

    and this is where it needs to go:

      http://domain.com/the-bag-to-nature-advantage.html
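    A sketch of one likely reason and a possible fix, not from the original post: Redirect matches only the URL path, never the query string, so a rule like the one above can never fire. mod_rewrite can test the query string explicitly; assuming .htaccess overrides are allowed in the document root, something along these lines should work:

      # appended to the site's document-root .htaccess (assumed location)
      cat >> .htaccess <<'EOF'
      RewriteEngine On
      # match the old CGI URL by its query string, then send a 301 to the new page
      RewriteCond %{QUERY_STRING} ^display=action&emptyoverride=yes&template=Assets/XHTML/Advantage\.html$
      RewriteRule ^cgi-bin/commerce\.cgi$ http://domain.com/the-bag-to-nature-advantage.html? [R=301,L]
      EOF

    The trailing "?" on the target drops the old query string from the redirect; the target page does not need to exist for the redirect itself to be issued.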

    Read the article

  • How to telnet into facebook chat

    - by OSX Jedi
    I was able to use Facebook chat with an external application using the following information: First, find your Facebook username by going to http://www.facebook.com/your_user/. Next, open iChat, then select iChat » Preferences and click on the Accounts tab. Click on the + (plus) sign to add a new account, with these settings:

      * Account Type is Jabber Account
      * Account name is your_user@chat.facebook.com, and enter your password
      * Click the drop-down arrow to reveal Server options. Enter chat.facebook.com as the server name.
      * Enter 5222 as the port and click Done.

    Click Done again, and you are good to go. From reading this, it seems that it might be possible to telnet into Facebook chat. I tried, but wasn't able to. Is it possible? How?
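    A rough sketch of what "telnetting in" would look like, not from the original post (and assuming the XMPP service still answers on port 5222): XMPP is a plain TCP protocol, so the stream can be opened by hand, but authentication (SASL, usually after STARTTLS) is the part that is impractical to type manually.

      telnet chat.facebook.com 5222
      # then paste an opening stream header, e.g.:
      #   <stream:stream to="chat.facebook.com" xmlns="jabber:client"
      #     xmlns:stream="http://etherx.jabber.org/streams" version="1.0">
      # the server should answer with its own <stream:stream> and a <stream:features>
      # element listing the authentication mechanisms it requires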

    Read the article

  • Issue in extending webapplication sharepoint

    - by GHIYA
    I have extended a web application in a farm. On the main server, vsmoss1, I set up:

      vsmoss1 -> web application (port 80)
      vm.com  -> extended web application (of the one above), anonymous access
      WFE servers: vsmoss2 and vsmoss3

    I have load balanced vm.com so that requests go to vsmoss2 and vsmoss3. When I hit vm.com it works fine without authentication (it also shows the content query web part on my page). I know there is no need to do this, but when I hit vsmoss2 or vsmoss3 directly, the content query web part shows an error. Any solution for that?

    Finding this strange, I tried the following:

      - I stopped the extended web application on vsmoss2 and vsmoss3: the site is up and running, but this time with authentication.
      - I stopped both the extended and the main web application: the site on vsmoss2 and vsmoss3 is down.
      - I stopped only the main web application on vsmoss2 and vsmoss3: the site is up and running without authentication.

    Does anyone have an idea why it behaves like this?

    Read the article

  • "Recipient address rejected" when sending an email to an external address with sendgrid

    - by WJB
    In postfix, I'm using relayhost to send email to an external address through SendGrid, but I get an error about the local recipient table when sending an email from my PHP code. This is my main.cf in /postfix/:

      ## -- Sendgrid
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = static:username:password
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = may
      header_size_limit = 4096000
      relayhost = [smtp.sendgrid.net]:587

    This is the error message from the log:

      postfix/smtpd[53598]: [ID 197553 mail.info] NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost.localdomain>

    One interesting thing is that when I use "sendmail john@external.com" from the command line, the email is delivered successfully through SendGrid. I think that is because this path goes through postfix/smtp instead of postfix/smtpd; the log for it says:

      postfix/smtp[18670]: [ID 197553 mail.info] AAF7313A7E: to=, relay=smtp.sendgrid.net[50.97.69.148]:587, delay=4.1, delays=3.5/0.02/0.44/0.18, dsn=2.0.0, status=sent (250 Delivery in progress)

    Thank you
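    A hedged diagnostic sketch, not from the original post: "User unknown in local recipient table" is smtpd deciding that the recipient's domain is local to this box, which happens before the relayhost is ever consulted. It is worth checking whether the destination domain has crept into mydestination (or the local/virtual maps):

      postconf mydestination local_recipient_maps
      # if the external recipient's domain is listed as local, remove it, for example:
      #   postconf -e 'mydestination = localhost.localdomain, localhost'
      #   postfix reload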

    Read the article

  • Can I use Zoneedit to do URL rewrite?

    - by chilly-child
    This is our scenario: our DNS is hosted by one company, but they don't manage it; we use ZoneEdit (www.zoneedit.com) to manage the DNS itself, such as nameservers, CNAMEs, etc. Then we have our web host, where our files are hosted. We have a subdomain created in ZoneEdit, and we would like to do a URL rewrite so that subdomain.ourdomain.com is displayed as www.ourdomain.com/subdomain. Should the URL rewrite be done in ZoneEdit, at the web host, or at the DNS host? I checked the ZoneEdit docs but could not find a way to do a URL rewrite. Need some advice. Thanks
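    A hedged sketch, not from the original question: DNS records (ZoneEdit) can only point a name at a host; they cannot rewrite URL paths, so the rewrite has to live at the web host. Assuming Apache with .htaccess enabled for the subdomain's docroot (the path below is hypothetical), a redirect-style rewrite could look like this:

      cat >> /path/to/subdomain-docroot/.htaccess <<'EOF'
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^subdomain\.ourdomain\.com$ [NC]
      RewriteRule ^(.*)$ http://www.ourdomain.com/subdomain/$1 [R=301,L]
      EOF

    If the subdomain name should stay in the address bar instead, the same mapping can be done internally with a [P] proxy rule or a vhost alias, but that needs access to the host's Apache configuration.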

    Read the article

  • .htaccess configuration issue

    - by Hammad Haider
    Hi, I am running two websites on one domain: www.example.com and www.example.com/site2. In site2 there are two folders, folder1 and folder2. My index.php is in folder2, but the files defining the methods it uses also need to be included; I am setting this up through .htaccess, but I am unable to pull in the files that are in folder1 and get 500 and 400 errors in the browser. These are the lines I am using in the .htaccess file; the first line works fine, the others do not:

      RedirectMatch ^/$ http://www.example.com.pk/site2/views/
      AllowOverride All
      php_value include_path ".:/home/example/public_html/site2/system"

    Waiting for your quick response. Thanks, regards, Hammad Haider.
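    A hedged guess at the 500, plus a sketch (paths are taken from the question; adding folder1 is an assumption): AllowOverride is only valid in the main server or virtual-host configuration, so putting it inside .htaccess itself makes Apache refuse the file with an internal server error. Dropping that line and appending folder1 to the include path may be all that is needed:

      cat > /home/example/public_html/site2/.htaccess <<'EOF'
      RedirectMatch ^/$ http://www.example.com.pk/site2/views/
      # append folder1 so includes resolve from both locations
      php_value include_path ".:/home/example/public_html/site2/system:/home/example/public_html/site2/folder1"
      EOF

    (php_value only takes effect under mod_php; with CGI/FastCGI the include_path belongs in php.ini or a .user.ini instead.)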

    Read the article

  • need help writing puppet module for sssd.conf using Hiera

    - by mr.zog
    I need to build a module to manage /etc/sssd/sssd.conf on our Red Hat VMs. The sssd modules published on the forge don't seem to do what I want, nor do I feel like forking any of them. I want to keep all the configuration data in Hiera's common.yaml file. Below is my sssd.conf file.

      [sssd]
      config_file_version = 2
      services = nss, pam
      domains = default

      [nss]
      filter_groups = root
      filter_users = root
      reconnection_retries = 3
      entry_cache_timeout = 300
      entry_cache_nowait_percentage = 75

      [pam]

      [domain/default]
      auth_provider = ldap
      ldap_id_use_start_tls = True
      chpass_provider = ldap
      cache_credentials = True
      ldap_search_base = dc=ederp,dc=com
      id_provider = ldap
      ldap_uri = ldaps://lvldap1.lvs01.ederp.com/ ldaps://lvldap2.lvs01.ederp.com/
      ldap_tls_cacertdir = /etc/openldap/cacerts

    What is the best, most economical way to build the sssd.conf file? Should I have multiple .pp files such as domain.pp, pam.pp etc., or should all the lines of configuration land in init.pp?
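    A minimal sketch, not the poster's code, with assumed paths and a hypothetical ERB template: keep the whole file as one Hiera hash and render it with a single file resource, rather than splitting logic across domain.pp, pam.pp and friends.

      # Hiera: one hash, sections as keys (only two sections shown here)
      cat >> /etc/puppet/hieradata/common.yaml <<'EOF'
      sssd::settings:
        sssd:
          config_file_version: '2'
          services: 'nss, pam'
          domains: 'default'
        nss:
          filter_groups: 'root'
          filter_users: 'root'
      EOF

      # module: a single class with one file resource
      cat > /etc/puppet/modules/sssd/manifests/init.pp <<'EOF'
      class sssd ($settings = hiera_hash('sssd::settings')) {
        file { '/etc/sssd/sssd.conf':
          ensure  => file,
          owner   => 'root',
          group   => 'root',
          mode    => '0600',
          content => template('sssd/sssd.conf.erb'),  # template iterates over $settings
          notify  => Service['sssd'],
        }
        service { 'sssd':
          ensure => running,
          enable => true,
        }
      }
      EOF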

    Read the article

  • Sending mail to local address crashes web server (sendmail)

    - by deceze
    When trying to send mail automatically from a script at example.com via PHP's mail() to foo@example.com, the Apache server throws an Internal Error. I believe internally it is configured to use sendmail. The message gets dropped into ~/dead.letter and the general error log reads: [Wed May 12 11:26:45 2010] [error] [client xxx.xxx.xxx.xxx] malformed header from script. Bad header=/home/example/dead.letter... S: /home/example/www/test.php Trying any other address, not @example.com, works just fine. I have googled and serverfaulted for solutions, but they all require to edit configuration files in /etc/mail and similar system places, which is not an option, since this problem occurs on a shared host in which I only have access to ~/. Does anyone have a suggestion?
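    A hedged check, not from the original question: on shared hosts this symptom often means the box's MTA believes example.com is a locally hosted mail domain, so messages to it never leave the server. That can be confirmed from the shell without touching /etc/mail:

      printf 'To: foo@example.com\nSubject: test\n\nhello\n' | /usr/sbin/sendmail -t -v
      # a "user unknown"-style bounce or another dead.letter here confirms local delivery
      # is being attempted; the usual workaround without root is to send via authenticated
      # SMTP to the domain's real mail server (e.g. with a library such as PHPMailer)
      # instead of handing the message to the local sendmail binary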

    Read the article

  • post-receive hook permission denied "unable to create file" error

    - by ThomasReggi
    Just got gitolite installed on my web server and am trying to get a post-receive hook that points the Git work tree at Apache's document root. This is what my post-receive hook looks like (I got the script from "Using Git to manage a web site"):

      #!/bin/sh
      echo "post-receive example.com triggered"
      GIT_WORK_TREE=/srv/sites/example.com/public git checkout -f

    This is the error response I'm getting back from "git push origin master" on my local workstation. These are files from within my repository:

      remote: post-receive example.com triggered
      remote: error: unable to create file .htaccess (Permission denied)
      remote: error: unable to create file .tm_sync.config (Permission denied)
      remote: fatal: cannot create directory at 'application': Permission denied

    Permissions of public:

      drwxr-xr-x 5 root root 4096 Jun 26 17:23 public
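    A hedged fix sketch, assuming the hook runs as the gitolite user (commonly "git"): the checkout fails simply because /srv/sites/example.com/public is owned by root and is not writable by that user.

      # give the deploying user ownership of the work tree...
      sudo chown -R git:git /srv/sites/example.com/public
      # ...or keep root as owner and grant the git user's group write access instead:
      #   sudo chgrp -R git /srv/sites/example.com/public
      #   sudo chmod -R g+w /srv/sites/example.com/public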

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is once the redirect is set it also affects /specialdir - even if I right click on that directory and select "content should come from ... local directory" that change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • Plus signs appearing in Google searches

    - by emddudley
    Ever since Google implemented their new look at the beginning of May, I have been having trouble with their search engine changing all of the spaces in my query to plus signs. This behavior occurs when I use the search box in both Firefox and Internet Explorer. For example, if I search for google search plus signs I am taken to the following URL, where google+search+plus+signs is in the search box. http://www.google.com/search?q=google+search+plus+signs&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a However if I perform the search from google.com, I get taken to a different URL with google search plus signs as I'd expect: http://www.google.com/#hl=en&source=hp&q=google+search+plus+signs&aq=f&aqi=g1&aql=&oq=&gs_rfai=&fp=d2a3ca21987adb1 Do I need to update my browsers or something?

    Read the article

  • Using a Custom Domain Name In Place of etsy

    - by Graviton
    I am thinking about creating an online shop at Etsy. The one requirement I have is that I want users to see my domain name (www.myname.com) instead of myname.etsy.com. Given that I don't have access to the Etsy servers, is there anything I can do on my own domain (assuming I am using Apache) so that every request to www.myname.com is translated to the Etsy side? This is so that whoever comes to my website won't see the word etsy in the URL. In particular, I want my custom domain name to stay in the browser's location bar when the redirect completes. Is there any way to do this with Apache?
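    A hedged sketch, not an Etsy-supported setup: keeping www.myname.com in the address bar while serving Etsy's pages means reverse-proxying rather than redirecting. With mod_proxy and mod_proxy_http enabled, a hypothetical vhost could look like:

      cat > /etc/apache2/sites-available/myname.conf <<'EOF'
      <VirtualHost *:80>
          ServerName www.myname.com
          ProxyPreserveHost Off
          ProxyPass        / http://myname.etsy.com/
          ProxyPassReverse / http://myname.etsy.com/
      </VirtualHost>
      EOF

    Two caveats: absolute links and assets inside Etsy's pages will still point at etsy.com (a proxy cannot rewrite page content by itself), and Etsy's terms may not permit fronting the shop this way.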

    Read the article

  • Citrix Access Gateway not redirecting to login URL

    - by Dave
    We have an older setup for XenApp - users log in through Citrix Secure Gateway running on a Windows box. (We hope to start using a NetScaler soon.) Earlier today, connections to https://citrix.company.com/ started throwing up a 503 error page instead of redirecting to https://citrix.company.com/Citrix/XenApp/. If you go directly to the /Citrix/XenApp/ URL, the user is properly directed to the login page and can launch apps. We've restarted the service and rebooted the server; we haven't yet tried uninstalling and reinstalling the software. Before we do that, I'm looking for ideas as to how we can get the redirect working again without a fairly major outage window. To make things more interesting, many of our users have Citrix Receiver installed, also pointed at https://citrix.company.com/. Receiver makes itself the default launcher for ICA files and gives a connection error when it tries to load apps - probably because of the same non-functional redirect?

    Read the article

  • Tmux causes Emacs glitch

    - by killy9999
    Recently I started using Tmux, but I noticed that it causes a strange Emacs glitch. When I open source code for Elisp or Haskell, the comments aren't highlighted; only the comment delimiter is (";" in the case of Elisp, "--" in the case of Haskell), and the rest of the commented line is in the normal colour. When I run Emacs outside of Tmux everything works as expected - the whole commented line is highlighted in the colour denoting a comment. Any ideas why this is happening?

    SOLUTION: Based on Stefan's comment I added this to my .emacs file:

      (custom-set-faces
       '(font-lock-comment-face
         ((((class color) (min-colors 8) (background dark))
           (:foreground "red")))))

    Now the comments are displayed in red, just like the comment delimiters.
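    A hedged note on the likely root cause (an assumption, not part of the original post): inside tmux, TERM is often "screen", which advertises only 8 colours, so Emacs falls back to the min-colors 8 variants of its faces. Telling tmux to advertise a 256-colour terminal usually restores the normal comment face without any .emacs changes:

      cat >> ~/.tmux.conf <<'EOF'
      set -g default-terminal "screen-256color"
      EOF
      tmux kill-server        # restart the tmux server so the option takes effect
      # inside a fresh tmux session, `echo $TERM` should now print screen-256color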

    Read the article

  • tracd multiple projects+nginx reverse proxy

    - by Xeross
    I am trying to set up nginx as a reverse proxy in front of tracd, but I only want to run a single tracd. Here is my config for this domain:

      server {
          listen 80;
          server_name bugs.XXXXXXXX.com;
          access_log /var/log/nginx/XXXXXXXX-bugtracker.access.log proxy;

          location / {
              rewrite ^/bugtracker/(.*)$ /$1;
              rewrite ^/bugtracker$ /;
              proxy_pass http://127.0.0.1:81/bugtracker/;
              proxy_redirect default;
              proxy_set_header Host $host;
          }

          location ~ /\.ht {
              deny all;
          }
      }

    As you can see there are rewrite rules, because all the URLs that tracd spews out look like /bugtracker/something. That is just tracd generating URLs the way it normally does, but Trac lives at bugs.XXXXXXXX.com/ and not at bugs.XXXXXXXX.com/bugtracker. So how can I make tracd/Trac generate the (in this case) correct URLs?
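    A hedged sketch, with a hypothetical environment path: tracd builds its links from the path it serves the project under, so serving a single project at the root with -s / --single-env removes the /bugtracker prefix at the source, and the nginx rewrites become unnecessary.

      tracd -s --port 81 /var/trac/bugtracker
      # nginx can then proxy straight through:
      #   location / {
      #       proxy_pass http://127.0.0.1:81/;
      #       proxy_set_header Host $host;
      #   }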

    Read the article

  • Routing a url to fetch content from another site

    - by Abhishek
    Environment: IIS 7. I have a default site, www.domain.com, whose folder is C:\Inetpub\wwwroot\domain. There is a subdomain, www.subdomain.domain.com, whose folder is C:\Inetpub\wwwroot\domain\subdomain. Now I have set up a new website on an external server, and I cannot put its content on the server above for various reasons. I need the URL www.subdomain.domain.com/blog to fetch content from this external server while the URL in the browser stays the same. How can this be achieved in IIS 7?

    Read the article

  • rsync to EC2: Identity file not accessible

    - by Richard
    I'm trying to rsync a file over to my EC2 instance:

      rsync -Paz --rsh "ssh -i ~/.ssh/myfile.pem" --rsync-path "sudo rsync" file.pdf [email protected]:/home/ubuntu/

    This gives the following error message:

      Warning: Identity file ~/.ssh/myfile.pem not accessible: No such file or directory.
      [email protected].com's password:

    The pem file is definitely located at the path ~/.ssh/myfile.pem, though: vi ~/.ssh/myfile.pem shows me the file. If I remove the remote path from the very end of the rsync command:

      rsync -Paz --rsh "ssh -i ~/.ssh/myfile.pem" --rsync-path "sudo rsync" file.pdf [email protected].com

    Then the command appears to work...

      building file list ...
      1 file to consider
      file.pdf
           41985 100%    8.79MB/s    0:00:00 (xfer#1, to-check=0/1)

      sent 41795 bytes  received 42 bytes  83674.00 bytes/sec
      total size is 41985  speedup is 1.00

    ...but when I go to the remote server, nothing has actually been transferred. What am I doing wrong?
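    A hedged reading of what is going on (the host name below is a placeholder, since the real one is redacted above): the ~ sits inside a quoted string, so the shell does not expand it, and depending on the ssh version it can be passed through literally, producing exactly this warning; spelling the path out (or using $HOME, which is expanded even inside double quotes) avoids the ambiguity. And without a trailing ":path", rsync treats the last argument as a local destination, which is why the second command "works" but nothing reaches the instance.

      rsync -Paz --rsh "ssh -i $HOME/.ssh/myfile.pem" --rsync-path "sudo rsync" \
          file.pdf ubuntu@ec2-host.example.com:/home/ubuntu/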

    Read the article

  • DNS on Red Hat - rndc: no server specified and no default

    - by Syahmul Aziz
    Hi all. The error is shown in the two screenshots below (images not included here), followed by the configurations for named.conf and the zone files.

    After applying alveso's suggestion below, I think there is no longer an error, but I still can't ping my own domain www.p0864868.com (10.0.0.1), nor can I do a host or nslookup lookup on it, as shown in the earlier screenshots. Please assist; thank you in advance. I have also attached the changes I made to my named.conf as well as my resolv.conf.

    Progress 2: turned on query logging by typing "rndc querylog". The output shown below is from when I pinged p0864868.com.

    Progress 3: changed the permissions of 10-0-0.zone and p086868.zone to 644 and the ownership to named:named. I still can't ping www.p0864868.com or run the host command; it says something like "network unreachable", and I don't understand what address it is referring to.
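    A hedged diagnostic sketch, not from the original post (the zone-file path is an assumption); these checks separate "named is unhappy with its config" from "the client is simply not asking this server":

      named-checkconf /etc/named.conf                        # syntax-check the main config
      named-checkzone p0864868.com /var/named/p0864868.zone  # confirm the zone actually loads
      rndc status                                            # works only if /etc/rndc.key matches named.conf
      dig @127.0.0.1 www.p0864868.com                        # query the local server directly,
                                                             # bypassing whatever resolv.conf points at

    "Network unreachable" from ping or host usually means the resolver is trying to reach a nameserver address that is not routable from this box, so it is also worth confirming that resolv.conf lists the server's own address (e.g. nameserver 10.0.0.1).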

    Read the article

  • Google SMTP settings not sending email

    - by Baboon
    I am having a hard time getting email sending to work in GitLab (for example when changing the email address in the profile settings). My server has exim4, and I can tell it works, because a simple mail() call in PHP does deliver to the recipient. In GitLab, though, it seems it doesn't, so I modified productions.rb to add SMTP settings and use Google's SMTP server:

      config.action_mailer.delivery_method = :smtp
      config.action_mailer.smtp_settings = {
        address: "smtp.gmail.com",
        port: 465,
        user_name: "user@gmail.com",
        password: "hashpassword",
        domain: "gmail.com",
        authentication: :plain,
        enable_starttls_auto: true
      }

    I even tried changing the port to 587 and 467, but it still doesn't work. Why is that? Can you please point me to what I am missing?
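    A hedged check, not GitLab-specific: Gmail's port 465 expects an implicit-TLS (SMTPS) connection, while 587 expects STARTTLS, so the mailer settings have to match the port (STARTTLS-style settings against 465 typically just hang or fail). A quick way to confirm the server is reachable and see which handshake each port wants:

      openssl s_client -connect smtp.gmail.com:465 -quiet            # implicit TLS
      openssl s_client -connect smtp.gmail.com:587 -starttls smtp    # STARTTLS
      # if both show a banner/certificate, the network path is fine and the mismatch
      # is in the action_mailer settings (or Google rejecting the login) rather than
      # connectivity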

    Read the article

  • Purpose of LAN Domain?

    - by Leonard Thieu
    What is the purpose of creating a domain name for your LAN? I'm using DD-WRT on my router and assigned local.moofz.com as the LAN domain. I set up Apache HTTP servers on two of the computers on my LAN to test it out. I could reach them at oneil.local.moofz.com and vala.local.moofz.com, but I found out that I could also reach them via their hostnames, oneil and vala. If I can reach them through their host names, then what is the purpose of having a domain name for my LAN?

    Read the article

  • Postfix - how to redirect email that would otherwise be rejected?

    - by Bartosz Kowalczyk
    I have a problem with spam and postfix + postgrey. It generally works well, but I still get false positives and good email gets rejected. So now I have a question. Can I configure postfix (and postgrey) so that instead of rejecting a message it is redirected to spam@mydomain.com (i.e. the recipient is changed)? Or, alternatively, could every email be copied to spam@mydomain.com before filtering, so that if a restriction hits and the message is rejected, a copy still exists in spam@mydomain.com? How do I do that? Sorry for my English. Can you help me? Thank you
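    A hedged sketch of the practical options rather than a true "redirect on reject" (paths are distro-dependent assumptions): a greylisting rejection happens during the SMTP dialogue, before the mail is accepted, so at that point there is nothing to copy or redirect. What usually helps is whitelisting the senders that keep getting falsely delayed, and/or keeping a copy of everything that is accepted:

      # whitelist a problem sender for postgrey (file path varies by distro)
      echo 'goodsender.example.com' >> /etc/postgrey/whitelist_clients.local
      service postgrey restart

      # keep a copy of all accepted mail in one mailbox
      postconf -e 'always_bcc = spam@mydomain.com'
      postfix reload
      # note: always_bcc copies accepted mail only; mail rejected at SMTP time never
      # enters the queue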

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

      Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
      Warning: Not using cache on failed catalog
      Error: Could not retrieve catalog; skipping run

    My node config looks like:

      node 'zordon.mydomain.com' {
        include template::common
        include template::puppetagent
        include template::lamp
        User::Create          # realize statement, mangled by the site markup
        sudo::conf { 'joe':
          priority => 60,
          content  => 'joe ALL=(ALL) NOPASSWD: ALL',
          require  => User::Create['joe'],
        }
      }

    The template::lamp class is what uses the apache module:

      class template::lamp {
        include myfirewall
        Firewall              # realize statements, mangled by the site markup
        Firewall
        class { 'apache': }
        class { 'apache::mod::php': }
        class { 'apache::mod::ssl': }
        class { 'mysql::server': }
      }

    (The markup is garbling the Puppet realize statements: the User::Create and Firewall lines above are just realizing a user and two firewall rules.) I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and that its MD5 matches the server's copy. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
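    A hedged troubleshooting sketch, not a confirmed diagnosis: "Invalid parameter" at catalog-compile time means the master is loading an a2mod type that lacks the identifier parameter, which usually points at a stale copy of the module or plugin earlier in the master's load path. It is worth checking what the master actually resolves for that environment:

      puppet config print modulepath --environment apache_update   # run on the master
      find /etc/puppet /var/lib/puppet -name a2mod.rb | xargs md5sum
      # if an older apache module (or an old synced a2mod.rb) shadows the updated one,
      # removing or updating it and restarting the master should clear the 400 error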

    Read the article

  • Basic clarification about Limited FTP/sFTP users

    - by mattewre
    I would like some clarification about the correct way to create limited users that can access my VPS, which is used as a web server with Nginx. I usually do NOT install FTP and allow access via SFTP only - is that an acceptable setup in general?

    This is what I usually do to create a limited user called "admin" that should have SFTP access to the folder with the website data:

      mkdir -p /var/www/mysite.com/
      adduser admin
      adduser admin www-data
      chown -R root:root /var/www
      chmod -R 755 /var/www
      chmod -R 755 /var/www/mysite.com
      chown -R admin:www-data /var/www/mysite.com/

    It does not seem to be the correct way: I always have permission problems when I upload files (for example with Wordpress in general). I would like to create a user that works exactly like the one a provider gives its clients when they buy a hosting service (that is FTP, though I would prefer SFTP access). It is for personal use, but I think a limited user is a lot safer to work with over SFTP than root.
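    A hedged variation on the same idea, not a definitive recipe: keep admin as the owner and www-data as the group, and let new uploads inherit the group automatically so the web server can always read them.

      chown -R admin:www-data /var/www/mysite.com/
      find /var/www/mysite.com -type d -exec chmod 2775 {} \;   # setgid on directories
      find /var/www/mysite.com -type f -exec chmod 0664 {} \;
      # directories created over SFTP now keep the www-data group, which avoids the
      # usual "web server cannot read what admin just uploaded" class of problems;
      # anything the application itself must write (e.g. a Wordpress uploads folder)
      # still needs group write for www-data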

    Read the article

  • rsync option --log-file seems not to be working

    - by user1017735
    I am using the --log-file option in rsync to capture the logs, but when I try to run it, it says: --log-file unrecognized option. Here is my command:

      #/usr/bin/rsync -av -u --log-file="/sreeni/log.txt" --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni

    Could someone help me with the right syntax? I tried these options as well:

      #/usr/bin/rsync -av -u --log-file /sreeni/log.txt --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni

    and

      #/usr/bin/rsync -av -u --log-file="/sreeni/log.txt" --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni
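    A hedged observation, not from the original post: the syntax in all three commands is fine; "unrecognized option" normally means the local rsync binary is simply too old to know --log-file (the option is parsed by the client you invoke, not by the remote --rsync-path). Quick ways to confirm and work around it:

      /usr/bin/rsync --version                    # check what the local client is
      /usr/bin/rsync --help | grep -- --log-file  # present only if the build supports it
      # fallback with an older client: capture the output with shell redirection instead
      /usr/bin/rsync -av -u --rsync-path=/usr/local/bin/rsync /sreeni nnmhpt20.ind.hp.com:/sreeni > /sreeni/log.txt 2>&1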

    Read the article

  • IIS 7 with verisign certificate, invalid certificate returned

    - by bh213
    We have IIS 7 on Windows 2008, and we installed a VeriSign certificate and bound it to HTTPS. The certificate seems fine. The chain:

      mysite.com - not expired
      VeriSign International Server CA - Class 3 - not expired
      VeriSign Class 3 Public Primary Certification Authority - not expired

    Yet when I use VeriSign's online validation (https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1130#), it tells me that the second certificate is expired. This is what it reports (mysite itself is reported to be OK):

      --Issued To--
      Organization: VeriSign Trust Network
      Organizational Unit: www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign
      Organizational Unit 2: VeriSign International Server CA - Class 3
      Organizational Unit 3: VeriSign,, Inc.
      --Issued By--
      Organization: VeriSign,, Inc.
      Organizational Unit: Class 3 Public Primary Certification Authority
      Country: US
      Validity Start: Wed Apr 16 17:00:00 PDT 1997
      Validity End: Wed Jan 07 15:59:59 PST 2004

    Any ideas?
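    A hedged way to see the problem from outside, not from the original question: the validator judges the chain the server actually sends, not what the local Windows certificate store considers valid, so it helps to list the served chain directly.

      openssl s_client -connect mysite.com:443 -showcerts </dev/null
      # the "Certificate chain" section lists every certificate IIS sends; if the old
      # 1997-2004 Class 3 intermediate appears there, installing the current VeriSign
      # intermediate CA in the server's certificate store (and rebinding the site)
      # is the usual fix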

    Read the article
