Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.

  • What ports do I need open for IMAP connections?

    - by iamjonesy
    I'm developing a web application that connects to an IMAP mailbox and fetches emails as part of its functionality. The application is PHP and I'm connecting like this:

        public function connect() {
            /* connect to gmail */
            $hostname = '{imap.gmail.com:993/imap/ssl}INBOX';
            $username = $this->username;
            $password = $this->password;

            /* try to connect */
            $this->inbox = imap_open($hostname, $username, $password)
                or die('Cannot connect to Gmail: ' . imap_last_error());
        }

    Developing locally on my Mac this was fine; I was able to connect and get emails. However, now that I've put the app on my web host's server, I'm getting the following error:

        Cannot connect to Gmail: Can't connect to gmail-imap.l.google.com,993: Connection timed out

    After checking with my hosting provider, they told me outgoing connections on port 993 are blocked. Is there any way around this? Otherwise I need to upgrade to a dedicated server :S
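
    A quick way to confirm the block from the host itself (a hedged sketch, not from the original post; it assumes shell access and the OpenSSL client):

        # Attempt a TLS handshake against Gmail's IMAPS port. An immediate
        # certificate dump means the port is open; a long hang ending in a
        # timeout confirms the firewall block.
        openssl s_client -connect imap.gmail.com:993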

  • How can I diff two Red Hat Linux servers?

    - by Stuart Woodward
    I have two servers that should have the same setup except for known differences. By running:

        find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -print | sort > allfiles.txt

    I can find a list of all the files on one server and compare it against the list of files on the other server. This will show me the differences in the names of the files that reside on the servers. What I really want to do is run a checksum on all the files on both servers and compare them, to also find where the contents are different, e.g.:

        find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -print | xargs /usr/bin/sha1sum

    Is this a sensible way to do this? I was thinking that rsync already has most of this functionality, but can it be used to provide the list of differences?
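
    rsync can do the comparison without copying anything; a hedged sketch, assuming root SSH access from one server to the other ('server2' is a placeholder):

        # -r recurse, -n dry run (change nothing), -v list the files,
        # -c compare by checksum rather than size/mtime
        rsync -rnvc --exclude=/proc --exclude=/sys --exclude=/dev / root@server2:/

    The dry-run output is the list of files whose contents differ between the two machines.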

  • Apache+FastCGI Timeout Problem

    - by Sadjad Fouladi
    Hi all. I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script (test.fcgi) as below:

        #!/bin/sh
        echo sadjad

    But when I invoke mysite.com/test.fcgi, I see an "Internal Server Error" message after a short period of time. The error.log file shows this error message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    I'm very confused, please help me! [Sorry for my poor English!]

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool, never any others on the same server. A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule, which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth, but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code: multiple sites point at the same physical directory, and the only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels: as the root of the site, and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

    - Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule, so that there is no possibility of a hang within it.
    - Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
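
    For the next occurrence, a hedged diagnostic sketch (appcmd ships with IIS7 at the path below; the elapsed threshold is illustrative):

        REM List requests that have been executing for more than 30 seconds,
        REM including the module each one is currently sitting in
        %windir%\system32\inetsrv\appcmd list requests /elapsed:30000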

  • nginx is not using gzip to talk to backend servers

    - by Michael Gorsuch
    Our web servers are running IIS 7 and are configured to compress dynamic and static content. When I hit these servers directly, gzip compression works. I recently placed nginx in front of them, and gzip compression has stopped. I was able to work around this by explicitly enabling gzip compression on nginx itself, but that seems a little inefficient considering I have half a dozen backends and only one active nginx box. It appears that nginx is stripping out the Accept-Encoding header. Does anyone have any advice for how to 'correct' this behavior? A sample configuration:

        upstream backend {
            server 127.0.0.1:8080;
        }

        server {
            listen 80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            location / {
                proxy_pass http://backend;
            }
        }
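
    Two hedged possibilities, neither confirmed from the original thread: nginx speaks HTTP/1.0 to backends by default, and IIS can be configured to refuse compression for HTTP/1.0 or proxied requests, so the header may be arriving but being ignored. A sketch of both workarounds on the nginx side:

        location / {
            proxy_pass http://backend;
            # Re-assert the header toward the backend
            proxy_set_header Accept-Encoding gzip;
            # On nginx 1.1.4+, speak HTTP/1.1 to the backend so IIS
            # doesn't skip compression for HTTP/1.0 requests
            proxy_http_version 1.1;
        }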

  • Proxmox - Uploading disk image

    - by davids
    I've got a KVM virtual machine on my local PC, and I'd like to copy it to a Proxmox server. According to the docs, I just have to create a new VM on Proxmox and add the existing disk image to it, but how do I upload the image to the server? In the admin panel, if I click on MyStorage - Content - Upload, it only gives me options to upload ISOs, VZDump backup files, or OpenVZ templates. Would a plain copy using scp be enough? In that case, into which folder?
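
    scp should work; a hedged sketch, assuming the default "local" storage and a VM with ID 100 (both the filename and the ID are placeholders):

        # KVM disk images for "local" storage live under /var/lib/vz/images/<vmid>/
        scp disk0.qcow2 root@proxmox:/var/lib/vz/images/100/

    After that, the image can be referenced from the new VM's configuration.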

  • mod_rewrite with UTF-8 accents, MultiViews, .htaccess

    - by GuruJR
    Problem: mod_rewrite, MultiViews and the Apache config.

    Introduction: The website is in French, and I had problems with Unicode encoding and mod_rewrite within PHP without MultiViews. The old server was not handling UTF-8 correctly (somewhere between PHP, Apache mod_rewrite, and MySQL). I updated the server to Ubuntu 11.04; the process was destructive:

    - lost all files in /var/www/ (the site was mainly 2 files, index.php and static.php)
    - lost the site-specific .htaccess file
    - lost the MySQL DBs
    - lost the old apache.conf

    What I have done so far. What works:

    - Set up GnuTLS for SSL, Listen 443 (port.conf)
    - Created 2 vhosts in one file, for :80 and :443 (website.conf)
    - Enforced SSL by redirecting :80 to :443 with a mod_rewrite redirect
    - Tried to set UTF-8 everywhere: charset and collation, DB connection, mb_* settings, names utf-8 and utf8_unicode_ci, everywhere (PHP, MySQL, Apache)
    - To be sure to serve files as UTF-8, I enabled MultiViews and renamed the files index.php.utf8.fr and static.php.utf8.fr
    - With MultiViews enabled, multibyte accents in URLs work
    - SSL TLS 1.0

    What doesn't work:

    - With MultiViews enabled, mod_rewrite works for only one of my RewriteRules
    - With MultiViews disabled, I lose access to the document root as "Forbidden"
    - With MultiViews disabled, I lose multibyte (single-character accent) support
    - The Apache default server is full of settings (what can I safely remove?)

    These are my configuration files so far.

    :80 vhost file (this one works; you can use this to force a redirect to HTTPS):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
        LanguagePriority fr

    :443 vhost file (GnuTLS is working):

        DocumentRoot /var/www/x
        ServerName example.com
        ServerAlias www.example.com
        <Directory "/var/www/x">
            allow from all
            Options FollowSymLinks +MultiViews
            AddLanguage fr .fr
            AddCharset UTF-8 .utf8
            LanguagePriority fr
        </Directory>
        GnuTLSEnable on
        GnuTLSPriorities SECURE:+VERS-TLS1.1:+AES-256-CBC:+RSA:+SHA1:+COMP-NULL
        GnuTLSCertificateFile /path/to/certificate.crt
        GnuTLSKeyFile /path/to/certificate.key
        <Directory "/var/www/x/base">
        </Directory>

    Basic .htaccess file:

        AddDefaultCharset utf-8
        Options FollowSymLinks +MultiViews
        RewriteEngine on
        RewriteRule ^api/$ /index.php.utf8.fr?v=4 [L,NC,R]
        RewriteRule ^contrib/$ /index.php.utf8.fr?v=2 [L,NC,R]
        RewriteRule ^coop/$ /index.php.utf8.fr?v=3 [L,NC,R]
        RewriteRule ^crowd/$ /index.php.utf8.fr?v=2 [L,NC,R]
        RewriteRule ^([^/]*)/([^/]*)$ /static.php.utf8.fr?VALUEONE=$2&VALUETWO=$1 [L]

    So my question is: what's wrong, what am I missing, and are there extra settings I need to remove from the Apache defaults to be sure all parts are using UTF-8 at all times and that my mod_rewrite rules work with accents? Thank you all in advance for your help; I will follow this question closely to add any needed information.

  • "Outlook must be online or connected to complete this action" windows XP, outlook 2007, connect to exchange using HTTP

    - by bob franklin smith harriet
    Hey, I can't connect to an Exchange server using Windows XP and Outlook 2007 via the "connect anywhere over HTTP" process. It had been working until recently, and the user reports no recent changes to his environment. The error is "Outlook must be online or connected to complete this action". It will prompt me for the username and password, which I can enter, and then it will give the error; however, this only happens after I delete the account and enter all the details for the Exchange server again. The client computer that is unable to connect using Outlook can connect to the HTTPS mail service, log in, and send/receive fine. Nobody else has reported issues. Making a test environment with a clean install of XP and Outlook 2007 gives the same error, but using Windows 7 and Outlook 2007 connects perfectly fine every time. I also removed all saved passwords using control keymgr.dll, which didn't help. Any assistance or ideas would be appreciated; at this point nothing I've tried from TechNet or Google works <_<

  • File access with hostname or ip only - no domain?

    - by Jonathon
    It seems likely that this is an obvious question, but I'm having trouble tracking down any useful information. Normally, when accessing files in a particular directory on a server, I'm able to create a virtual host, assign a domain, a root directory location, etc. However, I'm in a situation where I have server space and need to access files with only a hostname. Is this possible? For example, let's say the hostname is 123hostname.com, and the file I want access to is in /home/sub-directory/filename.php. How do I get at it via a browser? I've tried http://123hostname.com/home/sub-directory/filename.php and some other variations on that theme (which I can't post, because new users are restricted to one link per message). But I'm generally stuck. Any help, even if it's just to let me know that this isn't possible without some additional configuration, would be great. Thank you!
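
    One hedged sketch, assuming Apache and access to its configuration: filesystem paths outside the DocumentRoot aren't reachable by URL until they are mapped in, e.g. with mod_alias (the /files prefix and the Apache 2.2-style access rules are illustrative):

        Alias /files /home/sub-directory
        <Directory /home/sub-directory>
            Order allow,deny
            Allow from all
        </Directory>

    With that in place, http://123hostname.com/files/filename.php would map to /home/sub-directory/filename.php.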

  • IIS 7 with VeriSign certificate, invalid certificate returned

    - by bh213
    We have IIS7 on Windows 2008, and we installed a VeriSign certificate and bound it to HTTPS. The certificate seems fine. The chain is:

    - mysite.com (not expired)
    - VeriSign International Server CA, Class 3 (not expired)
    - VeriSign Class 3 Public Primary Certification Authority (not expired)

    Yet when I use VeriSign's online validation (https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1130#), I get that the second certificate is expired. This is what it reports (mysite itself is reported to be OK):

        --Issued To--
        Organization: VeriSign Trust Network
        Organizational Unit: www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign
        Organizational Unit 2: VeriSign International Server CA - Class 3
        Organizational Unit 3: VeriSign,, Inc.
        --Issued By--
        Organization: VeriSign,, Inc.
        Organizational Unit: Class 3 Public Primary Certification Authority
        Country: US
        Validity Start: Wed Apr 16 17:00:00 PDT 1997
        Validity End: Wed Jan 07 15:59:59 PST 2004

    Any ideas?
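
    A hedged check from any machine with OpenSSL (not from the original post): dump the chain the server actually sends, since the validator may be seeing a stale intermediate that differs from what the local certificate store shows:

        # Prints every certificate the server presents during the handshake
        openssl s_client -connect mysite.com:443 -showcerts </dev/null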

  • Re-deploy only the reports on SCOM Management Packs

    - by Gabriel Guimarães
    I've migrated Reporting Services on a SCOM 2007 R2 install, and noticed that the reports were not copied over. I can create a new report, but the ones I had from the management packs are gone. I've tried re-applying the management packs, but that doesn't re-deploy the reports, and when I try to access, for example, Monitoring - Microsoft Windows Print Server - Microsoft Windows Server 2000 and 2003 Print Services - State View, select any item, and click Alerts on the right menu, I get the following error:

        Date: 12/24/2010 12:40:35 PM
        Application: System Center Operations Manager 2007 R2
        Application Version: 6.1.7221.0
        Severity: Error
        Message: Cannot initialize report.

        Microsoft.Reporting.WinForms.ReportServerException: The item '/Microsoft.SystemCenter.DataWarehouse.Report.Library/Microsoft.SystemCenter.DataWarehouse.Report.Alert' cannot be found. (rsItemNotFound)
           at Microsoft.Reporting.WinForms.ServerReport.GetExecutionInfo()
           at Microsoft.Reporting.WinForms.ServerReport.GetParameters()
           at Microsoft.EnterpriseManagement.Mom.Internal.UI.Reporting.Parameters.ReportParameterBlock.Initialize(ServerReport serverReport)
           at Microsoft.EnterpriseManagement.Mom.Internal.UI.Console.ReportForm.SetReportJob(Object sender, ConsoleJobEventArgs args)

    The report doesn't exist on the Reporting Services side. How do I re-deploy these reports? Thanks in advance.

  • syntax error: unknown user 'munin' in statoverride file

    - by John
    Server running Ubuntu 12.04 LTS. I installed munin the other day on a server. I decided later to remove it with apt-get. I noticed that not everything was removed by the uninstall, so I manually removed the munin web directory and also removed the munin username and group from the server. However, I have just now tried to run apt-get upgrade, which is now returning an error:

        dpkg: unrecoverable fatal error, aborting:
         syntax error: unknown user 'munin' in statoverride file
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    I am now out of my depth. What does this mean? Google results have not really been helpful. Can anyone help? Thanks, John
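
    A hedged sketch of the usual cleanup: dpkg keeps ownership/permission overrides in /var/lib/dpkg/statoverride, and entries there still name the deleted munin user (the path in the second command is a placeholder; use whatever paths the first command reports):

        # See which overrides still reference the removed user
        grep munin /var/lib/dpkg/statoverride
        # Remove each one by the path listed in the file, e.g.:
        sudo dpkg-statoverride --remove /var/lib/munin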

  • Is there a command line two-factor authentication verification code generator?

    - by dan
    I manage a server with two-factor authentication. I have to use the Google Authenticator iPhone app to get the 6-digit verification code to enter after entering the normal server password. The setup is described here: http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html

    I would like a way to get the verification code using just my laptop, not my iPhone. There must be a way to seed a command-line app that generates these verification codes and gives you the code for the current 30-second window. Is there a program that can do this?
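
    One hedged possibility: oathtool, from the oath-toolkit package, implements the same TOTP algorithm (RFC 6238) that Google Authenticator uses. Seeded with the same base32 secret, it prints the code for the current 30-second window (the secret below is a placeholder):

        oathtool --totp -b JBSWY3DPEHPK3PXP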

  • 550 relay not permitted

    - by Nick Swan
    Hi, we are using FogBugz on our server to handle customer support emails. Occasionally, though, when sending emails we get errors back:

        550 relay not permitted

    This seems to happen at random: sometimes sending an email to a person works, and the next time it'll bounce back from the same person. I've tried setting up reverse DNS with the server host and creating the SPF record in GoDaddy, but we still get some of these errors. Is there anything else I can do, and is there a way of testing whether these are actually configured correctly? Many thanks, Nick
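
    A hedged sketch for checking the two records mentioned above (the domain and IP are placeholders):

        dig +short TXT example.com      # is the SPF record actually published?
        dig +short -x 203.0.113.10      # does reverse DNS resolve for the sending IP?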

  • Persistent network share connection not working with runas

    - by binarycoder
    If I use runas /user:DOMAIN\user cmd.exe (on XP), previously mapped persistent network drives are considered unavailable. net use shows:

        Status       Local     Remote                    Network
        -------------------------------------------------------------------------------
        Unavailable  H:        \\SERVER\SHARE            Microsoft Windows Network

    dir H: fails with "The system cannot find the path specified.". The connection is easily revived with NET USE H: \\SERVER\SHARE; I'm not asked for a password when I do this. What is going on? Can I make Windows safely revive this drive automatically when it is first accessed?
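
    This behavior is expected insofar as drive mappings are stored per user profile, so the runas user starts without them. A hedged workaround sketch (DOMAIN, user, and share are placeholders): re-map the share as part of launching the shell:

        runas /user:DOMAIN\user "cmd /k net use H: \\SERVER\SHARE"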

  • Issue with Exchange 2010 and Removing a Mailbox Database

    - by ThaKidd
    I did a 2003-to-2010 transition and everything is working well. During the 2010 install, a database was copied over with a random number at the end. I found it and moved three system mailboxes out of it into the database that all of the client accounts are in: I used the EMS to move those mailboxes to the other store, then used the EMC to remove the mailbox database. The problem is, I am now getting an error in Event Viewer every few hours complaining about this database:

        MSExchangeRepl - 4098
        The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'SERVER'. Error: (nothing reported after this)

    Does anyone know how to fix this issue? In advance, I appreciate your help, and thanks for your valuable input!
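
    A hedged diagnostic sketch (Exchange Management Shell; not from the original post): list every database the server still knows about, so the GUID in the event can be matched to a name, or shown to be orphaned:

        Get-MailboxDatabase | Format-List Name,Guid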

  • Redirecting to a folder

    - by RN
    Apache 2, Plesk 9.x. I have a website, www.example.com, and my blog is at www.example.com/blog. I have no content on www.example.com as of now, so I want all requests for example.com to be redirected to www.example.com/blog. How should I do that? Is this something I can do in Apache? I am using the GoDaddy DNS server. Not sure if it matters, but I have multiple domains hosted on the same server, and I am using Plesk to manage my virtual hosts.
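
    This is an Apache-side job; DNS can only point names at hosts, not at paths. A minimal hedged sketch for the vhost or .htaccess (assumes mod_alias is available):

        # Redirect only the root URL, so /blog itself isn't re-redirected
        RedirectMatch permanent ^/$ http://www.example.com/blog/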

  • reduce memory footprint of java virtual machine

    - by Lorenzo Boccaccia
    I have a Citrix server where multiple users run multiple Java applications. Is there a way to reduce the memory footprint of the JVM itself? The max heap is already set fairly low (64 MB), as is the permgen space (32 MB), and we're at the point where the JVM itself uses far more memory than the application (the committed area is around 350 MB). I'm looking for a way to reduce the JVM's RAM usage, or to make all the applications run within the same JVM, or any other way of sharing common pages between running JVMs (if possible), or to switch to a JVM that has optimizations for this scenario. Currently using Windows 2003 Server and the Sun Java virtual machine 1.6.
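
    A hedged sketch of footprint-oriented flags for the Sun 1.6 VM (the values and the jar name are illustrative; -Xshare:on enables class data sharing, which on this VM generation generally requires the client VM and the serial collector):

        java -client -Xshare:on -Xmx64m -XX:MaxPermSize=32m \
             -Xss256k -XX:ReservedCodeCacheSize=16m -jar theapp.jar

    Thread stack size (-Xss) and the code cache are often the bulk of the non-heap difference when many JVMs run side by side.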

  • HD latency measurement using bonnie++ on different machines with different RAM size

    - by j0nes
    Hello, I have run bonnie++ v1.96 on two different servers without any additional load. One server is a "physical" Dell server with 32 GB RAM; the other one is a virtual instance with 14 GB RAM. I have read in the bonnie++ manual that I should use twice the size of RAM in my bonnie++ runs, so I used 64 GB on the physical machine and 28 GB on the virtual machine. Now I want to compare the results, and I am wondering whether they are comparable at all. The most interesting part is the latency section: on the physical machine, the values are about 10 times higher than on the virtual machine! Can I take these results seriously (e.g. the virtual machine's disk is much, much faster), or does the different RAM size distort the results? Thanks! Jonas
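
    One hedged way to remove that variable: tell bonnie++ the same RAM figure on both machines, so it picks the same file size for both runs (flags as documented for v1.96; the mount point is a placeholder):

        # -r: claimed RAM in MB; bonnie++ then tests with twice that file size
        bonnie++ -d /data -r 14336 -u nobody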

  • Why does DNS work the way it does?

    - by sabof
    This is a canonical question about DNS (the Domain Name System). If my understanding of the DNS system is correct, the .com registry holds a table that maps domains (www.example.com) to DNS servers.

    - What is the advantage? Why not map directly to an IP address?
    - If the only record that needs to change when I configure a DNS server to point to a different IP address is located at the DNS server, why isn't the process instant?
    - If the only reason for the delay is DNS caches, is it possible to bypass them, so I can see what is happening in real time?
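
    A hedged illustration of that two-level mapping using dig (any domain works):

        # Walk the delegation chain from the root: the registry's answer is
        # an NS record (a nameserver name), not an address; only the final
        # hop returns the A record.
        dig +trace www.example.com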

  • Trouble using gitweb with nginx

    - by Rayne
    I have a git repository in a directory inside of /home/raynes/pubgit/. I'm trying to use gitweb to provide a web interface to it. I use nginx as my web server for everything else, so I don't really want to have to use another just for this. I'm mostly following this guide, which is the only guide I can find via Google and is really recent: http://michalbugno.pl/en/blog/gitweb-nginx

    fcgiwrap apparently isn't in Lucid Lynx's repositories, so I installed it manually. I spawn instances via spawn-fcgi:

        spawn-fcgi -f /usr/local/sbin/fcgiwrap -a 127.0.0.1 -p 9001

    That's all good. My /etc/gitweb.conf is as follows:

        # path to git projects (<project>.git)
        #$projectroot = "/home/raynes/pubgit";
        $my_uri = "http://mc.raynes.me";
        $home_link = "http://mc.raynes.me/";

        # directory to use for temp files
        $git_temp = "/tmp";

        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";

        # html text to include at home page
        $home_text = "indextext.html";

        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = $projectroot;

        # stylesheet to use
        $stylesheet = "/gitweb/gitweb.css";

        # logo to use
        $logo = "/gitweb/git-logo.png";

        # the 'favicon'
        $favicon = "/gitweb/git-favicon.png";

    And my nginx server configuration is this:

        server {
            listen 80;
            server_name mc.raynes.me;

            location / {
                root /usr/share/gitweb;
                if (!-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9001;
                }
                fastcgi_index index.cgi;
                fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The only difference here is that I've set fastcgi_pass to 127.0.0.1:9001. When I go to http://mc.raynes.me I'm greeted with a page that simply says "403" and nothing else. I have not the slightest clue what I did wrong. Any ideas?

  • Sending an Email from 2 Mail Servers

    - by Ted Smith
    We are currently attempting to move away from using a "local" mail (Exchange) server to a cloud-based offering for all our automated emails. The problem is that we send and receive thousands of emails a day, and uptime is quite critical, so the business does not want to put all their eggs in one basket: if we use a cloud-based offering (Mailgun), they would like a backup in case it goes down. So my question is: would it be possible to set multiple A, TXT and CNAME records pointing to multiple IP addresses, so that if one mail server goes down we can automatically start sending emails from the failover (without them being blocked by receivers doing a reverse DNS lookup)? I know we would still need to adjust the MX record for incoming emails, but it is acceptable to not receive emails for a short time (1-2 hours). Does this make sense?
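
    For the outbound side specifically, a single SPF record can authorize both senders at once; a hedged sketch in zone-file form (the IP is a placeholder, and include:mailgun.org is Mailgun's published include at the time of writing):

        example.com. IN TXT "v=spf1 include:mailgun.org ip4:203.0.113.10 ~all"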

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP web app to a new IIS7-based server. The site contains some .js files which hold JavaScript but also classic ASP in <% %> tags: a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what such a file could look like:

        var arrHOFFSET = -1;
        var arrLeft = "<";
        var arrRight = ">";

        <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %>
            addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","","");
        <% Else %>
            <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %>
            <% Else %>
                addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","","");
            <% End If %>
        <% End If %>

        defineSubmenuProperties(135,"center","center",-3,0,"","","","","","","");

    Currently this file (named custom.js, for example) starts throwing JS errors, because the server doesn't seem to recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the parser, but I am not sure how to go about doing this. Here is what I've tried so far:

    - Under the server node in IIS, under Handler Mappings, I created a new script map with the following settings:

        Request Path: *.js
        Executable: C:\Windows\System32\inetsrv\asp.dll
        Name: ASPClassicInJSFiles
        Mapping: Invoke handler only if request is mapped to: File
        Verbs: All verbs
        Access: Script

    - I also created a similar handler under the site node itself.
    - Under MIME Types, .js is defined as application/x-javascript.

    None of these work. If I simply rename the file to have an .asp extension then things work; however, this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.
