Search Results

Search found 56562 results on 2263 pages for 'gerald fauteux@oracle com'.


  • Mercurial changeset hook problem when auto updating. Server permissions maybe??

    - by Gary Willoughby
    I am using Mercurial SCM over a LAN through a normal shared folder instead of HTTP, and I'm having a problem getting the auto-update hook to run. I have entered the hook as detailed here: http://mercurial.selenic.com/wiki/FAQ#FAQ.2BAC8-CommonProblems.Any_way_to_.27hg_push.27_and_have_an_automatic_.27hg_update.27_on_the_remote_server.3F This installs the hook, but when I push something to the remote repo I get an error:

        added 1 changesets with 1 changes to 1 files
        running hook changegroup: hg update >&2
        warning: changegroup hook exited with status -1

    There is a similar Stack Overflow question here: http://stackoverflow.com/questions/2885246/mercurial-auto-update-problem but it offers no solution other than that it may be a permissions error somewhere. Has anyone else had this problem, and can anyone shed more light on this or give me a heads up on where to start fixing it? Thanks.
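
    For reference, the FAQ hook the question links to is normally added to the served repository's .hg/hgrc; a minimal sketch, assuming the repository lives on the shared folder, looks roughly like this:

        [hooks]
        # update the remote repository's working copy after every push
        changegroup = hg update >&2

    The >&2 only redirects the command's output to stderr so it shows up on the pushing side; an exit status of -1 suggests the hook process itself could not run, which is why a permissions problem on the shared folder is a plausible suspect.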

    Read the article

  • Sending large files - do any vendors sell their solution?

    - by Rob Nicholson
    We currently have an account with www.mailbigfile.com to allow us to send & receive files which exceed our client's email limits. In our industry, a 10MB limit is not unknown. Mailbigfile works fine for what it is but increasingly, our clients are starting to block it as a security risk. A solution would be for us to license the software and run it from our own web server which is far less likely to be blocked. Does anyone know of vendors in this market? We are looking at web collaboration systems but that's a much bigger project. The technology behind www.mailbigfile.com isn't that complex (http upload, email notification and then http download) so I'm hoping it won't be very expensive. Cheers, Rob.

    Read the article

  • Redirect to new domain and preserve username

    - by David Brown
    I recently switched to a new domain for a version control server I run. The server is usually accessed with a username included in the URL, such as https://[email protected]/some/stuff. I want to redirect requests to the old domain to the new domain and preserve everything else in the URL (including the username), so the former URL would be redirected to https://[email protected]/some/stuff. Currently I have the following rewrite condition and rule:

        RewriteCond %{HTTP_HOST} sub\.olddomain\.com
        RewriteRule (.*) https://sub.newdomain.net$1 [R=301,L]

    This works except it drops the userinfo part of the URL. Is there a way I can preserve the user info?

    Read the article

  • Nginx fastcgi problems with django

    - by wizard
    I'm deploying my first Django app. I'm familiar with nginx and FastCGI from deploying php-fpm, but I can't get Python to recognize the URLs, and I'm at a loss on how to debug this further. I'd welcome solutions to this problem and tips on debugging FastCGI problems. Currently I get a 404 page regardless of the URL, and for some reason a double slash. For http://www.site.com/admin/ I get:

        Page not found (404)
        Request Method: GET
        Request URL: http://www.site.com/admin//

    My urls.py from the debug output (these work in the dev server):

        Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order:
        ^listings/
        ^admin/
        ^accounts/login/$
        ^accounts/logout/$

    My nginx config:

        server {
            listen 80;
            server_name beta.ahrlty.com;
            access_log /home/ahrlty/ahrlty/logs/access.log;
            error_log /home/ahrlty/ahrlty/logs/error.log;
            location /static/ {
                alias /home/ahrlty/ahrlty/ahrlty/static/;
                break;
            }
            location /media/ {
                alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/;
                break;
            }
            location / {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:8001;
                break;
            }
        }

    And my fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    And lastly, I'm running FastCGI from the command line with Django's manage.py:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    I'm having a hard time debugging this one. Does anything jump out at anybody?
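
    One concrete mismatch jumps out of the configuration exactly as posted: nginx passes FastCGI traffic to 127.0.0.1:8001, while the runfcgi command binds port 8080. A sketch of an invocation that lines up with the nginx block above, assuming everything else stays the same:

        # bind the FastCGI server on the port that fastcgi_pass points at
        python manage.py runfcgi method=threaded host=127.0.0.1 port=8001 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    The doubled slash in the reported Request URL is typically a separate PATH_INFO/SCRIPT_NAME issue in fastcgi_params, so that is worth checking as well.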

    Read the article

  • How to have Jetty redirect http to https

    - by Noel Kennedy
    I want to redirect all requests for http to https using Jetty (6.1.24). For some reason (my ignorance) this is eluding me. This is what I have:

        <New id="redirect" class="org.mortbay.jetty.handler.rewrite.RedirectPatternRule">
          <Set name="pattern">http://foobar.com/*</Set>
          <Set name="location">https://foobar.com</Set>
        </New>

    In response I get 200 OK, and the body is the page over http, i.e. the redirect doesn't occur.

    Read the article

  • Redirect domains with lighttpd?

    - by matt
    I'm working on getting a WordPress MU install running on my VPS. I enabled the 'simple-vhost' mod and can access the site fine. The problem is I can only get to it from domain.com; if I try www.domain.com I get shown the default lighttpd page. I'd like to get everything pointing to one place. My DNS records look like this:

        *.domain.ORG     xx.xx.xx     domain.ORG  300  A
        domain.ORG       xx.xx.xx     domain.ORG  300  A
        WWW.domain.ORG   domain.ORG   domain.ORG  300  CNAME
        domain.ORG       domain.ORG   domain.ORG  300  MX

    What is happening? Thanks
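
    Since the DNS already points www at the bare domain, one common way to collapse everything onto one host in lighttpd is a mod_redirect rule; a minimal sketch, assuming mod_redirect is listed in server.modules and using domain.org as a stand-in for the real name:

        # send any www.* request to the bare domain, keeping the path
        $HTTP["host"] =~ "^www\.(.*)$" {
            url.redirect = ( "^/(.*)" => "http://%1/$1" )
        }

    With simple-vhost, the alternative is making sure the www host maps to the same document root as the bare domain.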

    Read the article

  • Chrome: automatically redirect me to highest ranking search result, like Firefox does

    - by Siim K
    How do I emulate the Firefox (I'm using v3.6) address bar search redirection in Google Chrome? For example, if I type imdb moon into the address bar and press Return in Firefox, it redirects me straight to http://www.imdb.com/title/tt1182345/ (and I've not visited the page before). When I try this in Chrome, I just get the Google search page http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=imdb+moon So it seems Firefox redirects automatically to the highest-ranking search result URL. Is there a setting or add-on for Chrome to achieve the same behaviour?

    Read the article

  • Ubuntu 12.04 host lookups extremely slow

    - by tubaguy50035
    I'm having issues with one of my servers taking a long time to look up host names. This is an Ubuntu 12.04 box, so I've tried following the new resolvconf directives. In my /etc/network/interfaces file, I defined my name servers like this:

        auto eth0
        iface eth0 inet static
            address someaddress
            netmask 255.255.255.0
            gateway 198.58.103.1
            dns-nameservers 74.14.179.5 72.14.188.5

    In my /etc/resolv.conf, I see these name servers, like this:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 74.14.179.5
        nameserver 72.14.188.5

    On another box, I edited resolv.conf directly, as directed by my host's setup help files. It looks like this:

        domain members.linode.com
        search members.linode.com
        nameserver 72.14.179.5
        nameserver 72.14.188.5
        options rotate

    This second box has no issues with host name lookups and responds quite quickly. Could not having the domain and search directives make my lookups slow? By slow, I mean it's taking anywhere from 5 to 15 seconds to find the IP address of a host.
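
    If the missing domain/search directives do turn out to matter, resolvconf can be fed them from the interfaces file rather than by hand-editing resolv.conf; a minimal sketch, assuming members.linode.com is the right search domain for this box:

        auto eth0
        iface eth0 inet static
            address someaddress
            netmask 255.255.255.0
            gateway 198.58.103.1
            dns-nameservers 74.14.179.5 72.14.188.5
            dns-search members.linode.com

    That said, delays of 5 to 15 seconds are more often a sign of the first listed nameserver timing out before the second is tried, so it is also worth querying each server directly (e.g. dig @74.14.179.5 somehost) to see which one is slow.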

    Read the article

  • `wget` is not recognized, or I can't find the downloaded file

    - by clankill3r
    If I use cd C:\Program Files (x86)\GnuWin32\bin then I'm able to use wget commands, for example:

        wget http://www.ultralightnews.com/trikes/images/trikes/dfs-singletrike.jpg

    but I can't find the downloaded file afterwards. I looked in C:\, in the bin folder mentioned above, and in GnuWin32\etc. If I try

        wget -O C:\Users\clankill3r\Downloads\wgetfolder wget http://www.ultralightnews.com/trikes/images/trikes/dfs-singletrike.jpg

    then it says Permission denied. I did allow all possible permissions for every group/user. Some people say it downloads to the current folder you're working in (that's why I looked in the bin folder), so I thought I'd try running the command from another folder: I used cd C:\Users\clankill3r\Downloads\wgetfolder and then the wget command, but then it says the wget command is not recognized. Can someone help?
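
    Two details in the commands above are worth flagging: -O expects an output file name, not a folder (pointing it at a directory is what produces the permission error), and wget is only recognized from other folders if its bin directory is on PATH. A sketch of the usual alternatives, assuming the GnuWin32 build behaves like standard wget:

        rem save under the original file name into a target directory
        wget -P C:\Users\clankill3r\Downloads\wgetfolder http://www.ultralightnews.com/trikes/images/trikes/dfs-singletrike.jpg

        rem or name the output file explicitly
        wget -O C:\Users\clankill3r\Downloads\wgetfolder\dfs-singletrike.jpg http://www.ultralightnews.com/trikes/images/trikes/dfs-singletrike.jpg

    Adding C:\Program Files (x86)\GnuWin32\bin to the PATH environment variable is what makes wget recognized from any folder.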

    Read the article

  • Best way to export an icon from photoshop?

    - by Mudtail
    Hi, I'm writing a program to notify me of new email. It's mostly complete, I'm just working on the notifyicon code now. It's supposed to display the usual application icon with a box containing the count of unread emails inside it. I created icons for this in photoshop, exported them as 16x16 transparent PNGs, then converted them into windows ico files using ConvertIcon.com. Given that the image was 16x16 and the WinXP system tray uses 16x16 icons, I would assume the images would work. HOWEVER, when I start the application and get an email, the icon's all blurry: http://cyndle.com/bPJ7

    Read the article

  • Cron Jobs unable to deliver email error report

    - by root
    I am sure I have the right syntax, but I am still unable to receive report emails at my email address. My OS is CentOS 6.4. My crontab is:

        MAILTO="cron@mydomain.com"
        * * * * * /usr/bin/php5 /home/myusername/public_html/cron.php /post/find_submit_test/1/

    The email address cron@mydomain.com is working fine, and I tested sendmail from SSH, which is also working fine, but the cron reports are not being delivered. I checked WHM for notification settings and couldn't find anything relevant there. Please advise me how to fix this. Thanks

    Read the article

  • View Public Key in Domain Key for a Domain

    - by Josh
    Using Jeff's blog post, I'm creating domain keys for my account. I wanted to verify the setup using the Get or Host command with BIND for Windows, but I'm lost on one of the commands. I can view the _domainkey TXT record with this command:

        host -t txt _domainkey.stackoverflow.com

    but I'm at a loss as to how I'd find the selector record. Jeff points out it can be anything before the period in "._domainkey.domain.com", but how would I list all records if I didn't know the exact query name? Is there a wildcard I could use to view all TXT records, or all records under this section?
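
    For background: DomainKeys/DKIM records live at <selector>._domainkey.<domain>, and ordinary DNS queries cannot enumerate the labels under _domainkey, so you generally need to know the selector to query it. A hedged example, using a made-up selector named 'mail' purely for illustration:

        host -t txt mail._domainkey.stackoverflow.com

    If you control the zone, the selector is whatever label you chose when publishing the key, so it can be read straight from the zone file; listing every record under _domainkey from the outside would require a zone transfer, which most servers refuse.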

    Read the article

  • Is there a free tool/package that can monitor web traffic and display URLS accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this on a client's site. Basically all traffic would be routed through this device, and it allowed the manager/administrator to inspect the web usage of the workers, determine how often certain resources were accessed, and block them if necessary (much like a content filter). It showed reports based on the domain name reached, e.g. Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information; it was a UTM as well. I have tried Endian Firewall with its NTOP install, but I don't think that will show URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution, and I have spare hardware to use a community product.

    Read the article

  • Differences between FCKeditor and CKeditor?

    - by matt74tm
    Sorry, but I've not been able to locate anything (even on the official pages) informing me of the differences between the two versions. Even a query on their official forum has been unanswered for many months: http://cksource.com/forums/viewtopic.php?f=11&t=17986&start=0 The license page talks a bit about the "internal" differences: http://ckeditor.com/license

        CKEditor Commercial License - CKSource Closed Distribution License - CDL
        ... This license offers a very flexible way to integrate CKEditor in your commercial application. These are the main advantages it offers over an Open Source license:
        * Modifications and enhancements doesn't need to be released under an Open Source license;
        * There is no need to distribute any Open Source license terms alongside with your product and no reference to it have to be done;
        * No references to CKEditor have to be done in any file distributed with your product;
        * The source code of CKEditor doesn't have to be distributed alongside with your product;
        * You can remove any file from CKEditor when integrating it with your product.

    Read the article

  • How to rewrite or redirect old, missing, or invalid URLs to a 404 page

    - by kath
    I recently upgraded a site and almost all URLs have changed. I have redirected all of them (or so I hope), but it is possible that some have slipped by me. Is there a way to somehow catch all invalid URLs and send the user to a certain page? I am using PHP. Thanks so much! The error document is already set in .htaccess, but nothing seems to change; the relevant part of the file is below:

        AddHandler application/x-httpd-php5s .php
        ErrorDocument 404 /content/404.php
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteBase /

    Here are two different URLs; the first is the old one (which I edited and which is no longer on the server), the second is the edited one that is on the server:

        #1 http://adsbuz.com/vehicles-cars/toyoya/2009-toyota-land-cruiser-gxr-4686.htm
        #2 http://adsbuz.com/vehicles-cars-for-sale/toyoya/2009-toyota-land-cruiser-gxr-4686.htm

    I need only the second one (with vehicles-cars-for-sale) to work, because the other directory has been renamed and is not on the server, but as you can see, after the site name (adsbuz) both vehicles-cars and vehicles-cars-for-sale open the same location. I hope I made myself clear.
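
    Given the stated goal (the old path should stop resolving and end up at the 404 page), a mod_rewrite sketch that could go inside the existing <IfModule mod_rewrite.c> block is shown below; treat it as an illustration, since the rest of the .htaccess is not shown:

        # answer old /vehicles-cars/ URLs with a 404 so ErrorDocument /content/404.php takes over
        RewriteRule ^vehicles-cars/ - [R=404,L]

        # alternatively, forward old URLs to their new location instead of killing them
        # RewriteRule ^vehicles-cars/(.*)$ /vehicles-cars-for-sale/$1 [R=301,L]

    The first variant relies on the R flag accepting non-3xx status codes, which Apache 2.x mod_rewrite supports.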

    Read the article

  • Block site on a PC logged into a domain and using a proxy

    - by Rauf
    I have read a lot of posts related to blocking sites. Most of them say to edit the hosts file. I know it is a good method, but it is not working for me. Can you guess what the issue is from the following details? My PC is joined to a domain and uses proxy settings, and the logged-in user has administrator privileges. After reading some answers, I changed the hosts file to:

        # 38.25.63.10    x.acme.com    # x client host
        127.0.0.1    localhost
        127.0.0.1    www.facebook.com

    and added a "no proxy for" exception for facebook. Still, it is not working. Why?

    Read the article

  • Adding SSL to Heroku site post launch

    - by dineth
    I have a Rails API that I want to deploy on Heroku. $20/month for an SSL site on Heroku is a little steep given I am not earning anything from this app yet. I am after advice and wondering if it is possible to add SSL sometime in the future. This is for an iOS app that I'm writing. Basically the idea would be that I continue to use https://myapp.heroku.com through their piggyback SSL. Once I get some cash in, I want to transition to using https://www.myapp.com. At that point the API would still need to work for app users who haven't upgraded to a new version of the app that points to the new domain. Anyone know if this is possible? Would both URLs continue to work? My gut feeling tells me this is not possible. Any advice would help. Thanks!

    Read the article

  • ProFTPd: Multiple Domain VirtualHosts on one IP address

    - by Badger
    I have a webserver that we are giving a consultant FTP access to. For one domain hosted on that server he needs access to a "dev" directory, and for a different domain hosted on that server he needs access to a different directory. I am trying to set this up with VirtualHosts, but I am having issues. Here is the VirtualHost section of my proftpd.conf file:

        <VirtualHost www.example2.com>
            ServerName "Example 2"
            DefaultRoot /var/www/example2/dev
        </VirtualHost>

        <VirtualHost www.example1.com>
            ServerName "Example 1"
            DefaultServer on
            DefaultRoot /var/www/example1
        </VirtualHost>

    When I FTP to either domain I always get the first VirtualHost, even if I FTP to the second domain.
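
    For context, and as a possible direction: unlike HTTP, plain FTP never tells the server which host name the client typed, so ProFTPD distinguishes <VirtualHost> blocks by IP address or port rather than by name; two names on one address will always hit the same block. A hedged sketch of one common workaround, giving each site its own FTP account instead (the group names here are made up):

        # in the main server configuration: jail each account into its own tree
        DefaultRoot /var/www/example2/dev  example2-ftp
        DefaultRoot /var/www/example1      example1-ftp

    DefaultRoot's second argument is a group expression, so each consultant login would be placed in the matching group; the other usual route is to give each <VirtualHost> its own IP address or Port.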

    Read the article

  • Server speed: sharing one script.php or using many copies of the same script.php

    - by Marco Demaio
    Let's assume I have thousands of domains on the same Apache server. Each domain is in a folder under the server's public_html document folder, so it can be accessed by calling "www.somedomain.com" or by calling "www.serverdomain.com/somedomain_folder". In each domain there is a website which needs a certain script.php (identical for each domain). From a coding point of view, it's obvious that it's better to use a single script.php: when I update it with new features/bug fixes etc., I only need to update one file on the server and it will work for all domains. But from a server point of view? If I use a single script, all domains will access it at the same time; will the server run slower compared to the situation where each domain calls its own copy of the script?

    Read the article

  • Rewrite redirect issue in Debian Squeeze

    - by hd01
    My server OS is Debian Squeeze. I have these lines to redirect non-www to www in the .htaccess file of my website:

        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

    but they cause this error in Firefox:

        The page isn't redirecting properly
        Firefox has detected that the server is redirecting the request for this address in a way that will never complete.
        This problem can sometimes be caused by disabling or refusing to accept cookies.

    When I comment those lines out in .htaccess, my site appears, but in non-www form. I'm sure this worked fine before on Ubuntu, but I don't know why it doesn't work now. Would you help me?

    Read the article

  • How to control fan speed using SpeedFan?

    - by John Young
    The CPU fan in my laptop is running too fast. I wish to control the speed manually to my preference, at times. Here is a CPU-Z screenshot of my laptop configuration: http://i.stack.imgur.com/1oST1.png And here is how SpeedFan window looks at my end: http://i.imgur.com/BIi0RdJ.jpg I have no idea how to use SpeedFan to control my laptop's CPU fan speed. How to configure it so that I can increase and decrease speed of the fan at my will? Edit: Sorry, the first image wasn't as intended, so I've corrected it now. Also, if someone can edit the post and embed the images in the post, that would be great. I need at least 10 reputation to successfully accomplish that.

    Read the article

  • Sendmail problem

    - by trobrock
    I am trying to get my server to be able to send email from PHP. Currently it is using sendmail, but whenever I try to send mail to a Gmail address I get this sort of response:

        --o54Mqd5s008981.1275691959/ServerName
        Content-Type: message/delivery-status

        Reporting-MTA: dns; ServerName
        Received-From-MTA: DNS; localhost
        Arrival-Date: Fri, 4 Jun 2010 22:52:38 GMT

        Final-Recipient: RFC822; someone@gmail.com
        Action: failed
        Status: 5.7.1
        Remote-MTA: DNS; gmail-smtp-in.l.google.com
        Diagnostic-Code: SMTP; 550-5.7.1 [xxx.xxx.xxx.xxx] The IP you're using to send mail is not authorized
        Last-Attempt-Date: Fri, 4 Jun 2010 22:52:39 GMT

    How can I set this up to relay through a Google account that I own? Is sendmail the best thing to use, or should I switch to Postfix or something? This is on an Ubuntu Server 9.10.
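
    Since the question already floats switching MTAs, here is a minimal sketch of the usual Postfix approach of relaying through Gmail's authenticated submission port; it assumes a Google account whose credentials are stored in /etc/postfix/sasl_passwd, and details such as app-specific passwords are left out:

        # /etc/postfix/main.cf (relevant lines only)
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

        # /etc/postfix/sasl_passwd
        [smtp.gmail.com]:587    yourname@gmail.com:yourpassword

    followed by postmap /etc/postfix/sasl_passwd and a Postfix reload. Sendmail can be made to do the same via SMART_HOST and authinfo, but the Postfix route is usually considered simpler.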

    Read the article

  • Logfile software for making queries, extracting and other operations

    - by Juw
    I have written an app that connects to an IIS 6 server to retrieve information. When doing this, I collect data (phone model etc.) and send it to the server with a regular GET HTTP call like this:

        http://www.myserver.com/getData.php?phonemodel=userphone&appversion=2&id=20

    This is logged in the IIS log files. I thought of writing my own parser for the log files, but why reinvent the wheel? I'm looking for software that can read IIS 6 log files. I would like it to be able to do:

        Extraction - extract all lines that contain www.myserver.com/getData
        Filtering  - view all lines where the HTTP status code is not 200
        Queries    - view all lines where phonemodel=iphone

    Any tips on free software that can help me with this? Thanks in advance!
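
    One free option that fits this exact shape of problem is Microsoft's Log Parser, which runs SQL-style queries over IIS W3C logs; a hedged sketch of the three requirements above (field names assume the default W3C logging fields are enabled):

        :: all getData requests, with their query strings
        LogParser -i:IISW3C "SELECT date, time, cs-uri-stem, cs-uri-query FROM ex*.log WHERE cs-uri-stem LIKE '%getData%'"

        :: all lines where the HTTP status is not 200
        LogParser -i:IISW3C "SELECT * FROM ex*.log WHERE sc-status <> 200"

        :: all lines where the phonemodel parameter was iphone
        LogParser -i:IISW3C "SELECT * FROM ex*.log WHERE cs-uri-query LIKE '%phonemodel=iphone%'"

    Log Parser itself is command-line only; Log Parser Studio wraps the same engine in a browsable UI.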

    Read the article

  • Cannot access shares via \\servername but \\ip works

    - by Jeff
    To set up the scenario: one of our techs set one of the domain controllers to use Microsoft time. The time IS correct (including the time zone) and DOES match the other domain controller's time; it was previously incorrect, however. Since the change, no users can connect via \\servername\share or \\servername.domainname.com, but \\ip\share works fine. I cannot even access it from the other domain controller, which I know has the same time. The server name DOES resolve to the correct IP address. Also, strangely enough, \\domainname.com works as well, and it resolves to the same server. Everything that I have tried resolves to the same, correct IP address. The error is:

        login failure: The target account name is incorrect.

    I believe it is time related, but since the times are correct and match, I'm not sure. Anyone know what might cause this?
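
    "The target account name is incorrect" on name-based access while IP-based access works is classically a Kerberos problem (IP access falls back to NTLM), often a stale secure channel or a duplicate SPN rather than time itself. A couple of standard checks, offered as starting points rather than a fix:

        :: verify the secure channel between this machine and the domain
        nltest /sc_query:yourdomain.com

        :: list the SPNs registered for the server's computer account (look for duplicates elsewhere)
        setspn -L servername

        :: flush cached Kerberos tickets on the client before retrying
        klist purge

    If the secure channel turns out to be broken, resetting the computer account password (netdom resetpwd, or rejoining the domain) is the usual remedy.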

    Read the article

  • Asterisk doesn't start properly at system startup. DNS lookup fails.

    - by leiflundgren
    When I start my Ubuntu system, Asterisk attempts two DNS lookups: one to find out my internet router's external IP, and one to find the IP of my PSTN SIP provider. Both fail:

        [Apr 7 22:14:54] WARNING[1675] chan_sip.c: Invalid address for externhost keyword: sip.mydomain.com
        ...
        [Apr 7 22:14:54] WARNING[1675] acl.c: Unable to lookup 'sip.myprovider.com'

    And since the DNS fails, it cannot register properly and I cannot make outgoing or incoming calls. If I restart Asterisk later, after boot-up, everything works excellently. Any idea how I should set things up so that either Asterisk's startup is delayed until DNS is up and healthy, or Asterisk somehow retries the DNS lookups later? Regards, Leif
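
    On the "retry later" side, Asterisk ships a DNS manager that periodically re-resolves hostnames used by SIP peers; a minimal sketch of enabling it, assuming a stock dnsmgr.conf:

        ; /etc/asterisk/dnsmgr.conf
        [general]
        enable = yes
        refreshinterval = 300   ; re-resolve registered hostnames every 5 minutes

    The other half of the question, delaying startup, usually comes down to making the asterisk init script depend on networking/DNS being up (e.g. adding $network and $named to its LSB Required-Start line), though the exact change depends on the init system in use.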

    Read the article
