Search Results

Search found 19375 results on 775 pages for 'codeigniter url'.


  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that will include many static assets (JS, CSS, images and thumbnails) in the generated pages. The website will use TYPO3 as its CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main PHP application runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal: just use an absolute URL like http://static.example.com/js/main.js and be done with it.

    But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image: the original image (like products/some.jpg) is uploaded to the static file server and is therefore not on the same server as the PHP application that tries to create the thumbnail; and TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application. In that case the static file server is basically useless: all thumbnails will be requested from the main application's server.

    So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server? For example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), can the products folder actually "point" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
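
    One way to make a directory "point" to another server at the file system level is a network mount such as NFS, which makes the remote files look local to PHP. A minimal sketch follows; the export path, mount point and hostname are assumptions, not part of the question's setup:

        # /etc/fstab on the application server, assuming static.example.com
        # exports /srv/www/products over NFSv4 (hostname and paths hypothetical):
        static.example.com:/srv/www/products  /var/www/products  nfs4  ro,noatime  0  0

        # Activate the mount; afterwards PHP opens products/some.jpg as if it
        # were a local file.
        sudo mount /var/www/products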

    Read the article

  • Configure Nginx to render static files and rewrite file extension or proxy_pass

    - by Pardoner
    I've set up Nginx to handle all my static files, and everything else is passed via proxy_pass to a Node.js server. It's working fine, but I'm having difficulty rewriting the URL so that it removes the .html file extension.

        upstream my_upstream {
            server 127.0.0.1:8000;
            keepalive 64;
        }

        server {
            listen 80;
            server_name staging.mysite.com;

            root /var/www/staging.mysite.org/public;
            access_log /var/logs/staging.mysite.org.access.log;
            error_log /var/logs/staging.mysite.org.error.log;

            location ~ ^/(images/|javascript/|css/|robots.txt|humans.txt|favicon.ico) {
                rewrite (.*)\.html $1 permanent;
                try_files $uri.html $uri/ /index.html;
                access_log off;
                expires max;
            }

            location / {
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header Connection "";
                proxy_http_version 1.1;
                proxy_cache one;
                proxy_cache_key sfs$request_uri$scheme;
                proxy_pass http://my_upstream;
            }
        }
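
    A common pattern for extensionless HTML is to let try_files probe for "$uri.html" in the catch-all location and only redirect when a client explicitly asks for the .html form. A minimal sketch, assuming the pages live under the same root (the @node name is illustrative):

        location / {
            # strip an explicit .html from incoming requests
            if ($request_uri ~ ^/(.*)\.html$) {
                return 301 /$1;
            }
            # serve /foo from foo.html if it exists, else hand off to Node
            try_files $uri $uri.html $uri/ @node;
        }

        location @node {
            proxy_pass http://my_upstream;
        }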

    Read the article

  • 550 Requested action not taken: mailbox unavailable on OS X server 10.6

    - by Marc Graham
    I recently added a new domain to my mail server. I have one main server, mail.example.com, and several other domains whose MX records point to mail.example.com. My two new domains have their MX records set correctly. The issue I am experiencing is the "550 Requested action not taken: mailbox unavailable" error, but only when I send emails to accounts on the new domains from an external email account such as Gmail. If I send an email to one of the newly made addresses on the new domain from an account within the same server, it delivers normally. For example:

    sending [email protected] to [email protected] receives the 550 error
    sending [email protected] to [email protected] works normally

    Here is a report from wormly.com, with server and account names changed for obvious reasons:

        Resolving hostname...
        Connecting...
        SMTP -> FROM SERVER: 220 existingmailserver.com ESMTP Service ready
        SMTP -> FROM SERVER: 250-Requested mail action okay, completed
        250-SIZE 0
        250-AUTH LOGIN PLAIN CRAM-MD5
        250-ETRN
        250-8BITMIME
        250 OK
        MAIL FROM: [email protected]
        SMTP -> FROM SERVER: 250 Requested mail action okay, completed
        RCPT TO: [email protected]
        SMTP -> FROM SERVER: 550 Requested action not taken: mailbox unavailable
        SMTP -> ERROR: RCPT not accepted from server: 550 Requested action not taken: mailbox unavailable
        Message sending failed.
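
    A 550 on RCPT from external senders, while internal delivery works, usually means the server does not list the new domain among the domains it accepts mail for. OS X Server 10.6 uses Postfix underneath, so a quick check is possible from the shell (a sketch, not specific to this server's config):

        # Show which domains Postfix will accept mail for; the new domain
        # should appear in one of these lists
        postconf mydestination virtual_alias_domains

        # After adding the missing domain via Server Admin (or main.cf),
        # reload Postfix
        sudo postfix reload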

    Read the article

  • Windows Vista Wrong Certificate With SNI

    - by JamesArmes
    I'm setting up SNI on an Apache server, and I thought things were going well. I have two URLs from different domains that point at the same site, with one virtual host set up for each and the appropriate certificate for each. One of the certificates is valid, but the other is self-signed (waiting on GoDaddy for the real cert). If I test the different URLs in Firefox, Safari and Opera, all works well: I get no errors for the URL with the valid certificate, and I get a self-signed warning for the other. However, in Internet Explorer 8 and Google Chrome, both URLs return the valid certificate (even if it's not valid for the specific site). So for the one site I get a valid certificate, and for the other I get a warning about the cert being for a different site. I tried switching the order of the vhosts and it made no difference. I know that Chrome and IE both use Windows' HTTP stack, so I understand why the behavior is the same for the two. What I don't understand is why I'm seeing this behavior.
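
    For reference, a minimal sketch of the vhost layout being described, with placeholder names and paths. Clients that do not send the SNI extension are always handed the first matching vhost's certificate, which is consistent with the behavior seen here from the Windows stack:

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName site-one.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/site-one.crt
            SSLCertificateKeyFile /etc/ssl/private/site-one.key
        </VirtualHost>

        <VirtualHost *:443>
            ServerName site-two.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/site-two-selfsigned.crt
            SSLCertificateKeyFile /etc/ssl/private/site-two.key
        </VirtualHost>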

    Read the article

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on Stack Overflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of Express middleware. I need to narrow down the problem to find out WHERE the traffic is being lost, and it was suggested that I try a packet sniffer on my app servers. Here's my setup:

    2x load balancers (m1.large)
    2x node.js servers (also m1.large)

    Here's what's interesting/unusual: the node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the node.js server:

        RewriteCond %{HTTP_HOST} ^api.streamified.me
        RewriteRule ^(.*) http://localhost:8888$1 [P]

    So, the request hits the load balancer, goes to an app server, gets routed to port 8888 if it's intended for the API, and gets handled by node.js. In the same httpd.conf file I turned on RewriteLogLevel 5, and then created a simple PHP+cURL script on my localhost to hit api.streamified.me with a random URL (which should cause node.js to trigger a simple "not found" response) until it resulted in a gateway timeout. Here, you can see that it has happened, and the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by node.js (or, at least, the first line of code in the first line of middleware never gets it). Image link: http://i.stack.imgur.com/3OQxS.png
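
    Since the proxy hop to Node goes over the loopback interface, a capture there shows directly whether the rewritten requests ever reach port 8888. A minimal sketch (interface name assumed):

        # Watch the Apache -> Node hop; -A prints payloads so the HTTP request
        # line is visible when a request does arrive.
        sudo tcpdump -i lo -nn -A 'tcp port 8888'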

    Read the article

  • How does Safari's Reader work and when does it show up?

    - by TestSubject528491
    Safari's Reader feature is a cool little app that displays a web page as a newspaper article, without all the distracting sidebars, comments, and ads. Sometimes it works and sometimes it doesn't, and I'm wondering how it knows when to show up. On my personal website, one of the pages has this option: you can click the Reader button in the URL bar and it is displayed beautifully, like a page in an iBook. However, none of my other web pages (on the same site) do this. I thought it had something to do with the <article> tag, but I removed that and it still works. Anyone know how this app works? Also, does anyone know of any Chrome extensions that are just like this? Google Reader is not the same thing.

    PS: From the cited Apple website:

        Safari Reader
        As you browse the web, Safari detects if you are on a web page with an article. Click the Reader button that appears in the Smart Address Field and an elegant view of the article appears — without any distracting content.

    Not much help, is it?

    Read the article

  • Best ASP.NET hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies which spend a lot on advertising and also give you very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please share their experience with past hosting companies and suggest a good ASP.NET hosting company? Please consider the following requirements:

    1) ASP.NET 3.5 or 4.0 supported
    2) URL rewriter support
    3) GZip support (dynamic, through code)
    4) Initial setup support (if required)
    5) SQL Server 2005 or 2008
    6) Access to the SQL Server DB using SQL Management Studio
    7) An environment supporting backup and restore of the DB on my own, without involving the tech support team
    8) Full-text search support
    9) FTP support
    10) Ability to send at least 500 emails daily
    11) 99.9% uptime (no matter that all web hosts claim 99.9% uptime, it's not true)
    12) An alert email sent when they do any maintenance or during downtime
    13) Reasonable hosting price

    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?

    Read the article

  • How can I use wildcards in an Nginx map directive?

    - by Ian Clelland
    I am trying to use Nginx to serve cached files produced by a web application, and have spotted a potential problem: the URL space is wide and will exceed the ext3 limit of 32,000 subdirectories. I would like to break up the subdirectories, making, say, a two-level filesystem cache. So, where I currently cache a file at /var/cache/www/arbitrary_directory_name/index.html, I would instead store it at something like /var/cache/www/a/r/arbitrary_directory_name/index.html. My trouble is that I can't get try_files, or even rewrite, to make that mapping. My searching on the subject leads me to believe that I need to do something like this (heavily abbreviated):

        http {
            map $request_uri $prefix {
                /aa*  a/a;
                /ab*  a/b;
                /ac*  a/c;
                ...
                /zz*  z/z;
            }

            location / {
                try_files /var/cache/www/$prefix/$request_uri/index.html @fallback;
                # or
                # if (-f /var/cache/www/$prefix/$request_uri/index.html) {
                #     rewrite ^(.*)$ /var/cache/www/$prefix/$1/index.html;
                # }
            }
        }

    But I can't get the /aa* pattern to match the incoming URI. Without the *, it will match an exact URI, but I can't get it to match just the first two characters. The Nginx documentation suggests that wildcards should be allowed, but I can't see a way to get them to work. Is there a way to do this? Am I missing something simple? Or am I going about this the wrong way?
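
    As far as I can tell, map's "*" masks are meant for hostname-style matching; for arbitrary URI prefixes a regex entry ("~pattern") is the usual tool, and it also removes the need to enumerate all 676 combinations. A sketch using named captures (note the caveat that a map value combining text and variables like this needs a reasonably recent nginx):

        map $request_uri $prefix {
            default              "";
            "~^/(?<a>.)(?<b>.)"  "$a/$b";   # first two characters of the URI
        }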

    Read the article

  • Files deleted. What could have happened?

    - by jjfine
    I'm having a weird issue today. I was writing and testing out some simple CGI scripts this morning when I realized that I couldn't run them from one of the other computers on the (Windows) network. So I had my network admin come in and take a look at what was going on. A few minutes later a co-worker came in and told me that a bunch of files he was working with, as well as a bunch of others (all *.c files) on the network drive, got deleted. He also noticed some strange apache_dump_500.log.txt files in the same directories where the files got deleted. The apache_dump_500.log.txt files all look like this:

        REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
        REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
        REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
        REDIRECT_QUERY_STRING=
        REDIRECT_REMOTE_ADDR=<my computer's local ip>
        REDIRECT_REMOTE_HOST=
        REDIRECT_SERVER_NAME=<my computer's domain url>
        REDIRECT_SERVER_PORT=
        REDIRECT_SERVER_SOFTWARE=
        REDIRECT_URL=/cgi-bin/trojan.py

    I looked and I don't have any trojan.py in my cgi-bin folder, all my Apache logs are clean, and the Windows event logger seems to have no traces of what happened either. My httpd.conf: http://pastebin.com/Yny2Yh8v

    I think we've got some kind of virus that added this trojan.py file to my cgi-bin, ran the script, and deleted the script and any traces from the logs. Is this a thing that happens? Any ideas whatsoever would be much appreciated!
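
    One quick thing worth trying on a Windows Apache box is a raw search across the whole log directory, in case the script name survived in a rotated or secondary log; the path below is an assumption, not taken from the question:

        REM search all Apache logs for any mention of the dropped script
        findstr /s /i "trojan.py" C:\Apache24\logs\*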

    Read the article

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have one server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers (they're going to use blade servers). Now, I'm not an expert at all, but my question to them was: if we need, for example, three 2 GHz CPU / 2 GB RAM machines, and you give me one machine with three 2 GHz CPUs and 6 GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages and disadvantages of the two solutions? What are the generally accepted best practices? Could you point out some URL reference dealing with the problem? Thank you in advance!

    EDIT: Some more info. The (internet/intranet) application is already layered. We have some servers on the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services: one is a DAL that communicates with the database layer, one is our single sign-on / user profile application that gets called once per page, and one is a clone of what is seen on the internet, to be used on our LAN.

    Read the article

  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php.) In order to authenticate, the user can either log in with a standard username/password or submit a client-side X.509 certificate. So, Apache's vhost conf looks something like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient none
        SSLVerifyDepth 1

        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
        </LocationMatch>

    That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient optional
        SSLVerifyDepth 1

        <LocationMatch "/(bar-one|bar-two|bar-three)">
            SSLVerifyClient none
        </LocationMatch>

    That fixes the upload problem, but the SSLVerifyClient none doesn't work for bar-one, bar-two, etc.: those directories are still prompted to present a certificate. Additionally, I also need the root URL to be accessible without the user being prompted for a certificate. I'm afraid that will cancel out the workaround, though.
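
    An alternative worth sketching: keep "none" as the server-wide default (so the root URL and the bar-* paths never prompt) and address the upload failure with SSLRenegBufferSize, which enlarges the buffer mod_ssl uses to replay the request body across the renegotiation. The buffer size is illustrative, and this is untested against the setup above:

        SSLVerifyClient none
        SSLVerifyDepth 1

        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
            SSLRenegBufferSize 10485760   # buffer request bodies up to ~10 MB
        </LocationMatch>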

    Read the article

  • How can I set up nginx to serve virtual hosts with Rails (Unicorn/Passenger) and PHP-FPM

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I installed nginx, php-fpm, and a Rails app, using guides like this one. I configured php-fpm to listen on a local socket:

        listen = /var/run/php-fpm/php-fpm.sock

    I configured nginx with multiple hosts:

        include /etc/nginx/conf.d/*.conf;

    I have several PHP site conf files like /etc/nginx/conf.d/site1.conf:

        server {
            listen 80;
            server_name site1.com www.site1.com;
            root /var/www/site1;

            location / {
                index index.html index.php;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
            }
        }

    and Rails site conf files like:

        upstream rails {
            server 127.0.0.1:3000;
        }

        server {
            listen 80;
            server_name site2.com www.site2.com;
            root /var/www/site2;

            location / {
                proxy_pass http://rails;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Url-Scheme $scheme;
            }
        }

    I have a Unicorn Rails server running via rails s -p 3000. Yet no sites come up for either site1.com or site2.com, although I can get to the Rails site at www.site2.com:3000. What is wrong? I've spent two days (nearly 30 hours) trying many different blogs, SO/SF questions, etc. Please share your insight or answer.

    Edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.
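
    If nothing at all shows up in the access or error logs, the requests are most likely never reaching this nginx on port 80. A few quick checks, sketched with the question's placeholder names:

        # Does each domain actually resolve to this instance's public IP?
        dig +short site1.com www.site2.com

        # Is nginx (and not Apache, or nothing) listening on :80?
        sudo netstat -tlnp | grep ':80 '

        # Bypass DNS entirely and test name-based routing directly:
        curl -v -H 'Host: site1.com' http://SERVER_IP/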

    Read the article

  • MAMP Pro virtual hosts on Mountain Lion not being recognized

    - by user135242
    I'm running MAMP Pro 2.1.1 on OS X 10.8.1 and not having any luck getting Apache to recognize any hosts that have been set up in MAMP besides localhost. This only happens when trying to use port 80; when I switch to port 8888, everything runs fine. Mountain Lion's built-in Apache has been disabled, and I get no errors or warnings when starting MAMP and the servers. Any doc root I set for "localhost" runs without problem. Any other server name I define, however, results in "cannot connect" when viewing in Chrome (not to be confused with "cannot find": the browser is in fact following /etc/hosts to 127.0.0.1, but MAMP's Apache is simply not responding).

    I was wondering if anyone else has run into this issue or knows how to solve it. I'm working on some WordPress development, and it keeps wanting to redirect to the base URL (with no port reference) even during setup. I'm sure I could fix things from the WP side, but I'd rather figure out what the root issue is with MAMP. Thanks in advance for any insight you can provide.
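
    Since /etc/hosts is clearly being honored, the next thing to establish is what (if anything) actually owns port 80 while MAMP is running. A quick check from Terminal:

        # List the process listening on port 80; if nothing shows up, MAMP's
        # Apache never bound the port (ports below 1024 need root privileges).
        sudo lsof -nP -iTCP:80 -sTCP:LISTEN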

    Read the article

  • How to migrate Fedora DS (389 DS) to a new machine?

    - by zengr
    Hello, I am trying to migrate a Fedora DS (1.2.2) installation to a new server (1.2.7.5). The process has been painful, to say the least. The old server (1.2.2) was itself an upgrade from an old Fedora DS setup, so it does not contain migrate-ds-admin.pl. I found this question, but the URL does not open. I am aware that I need to use migrate-ds-admin.pl, but I am clueless. How do I use it? I assume it works like this:

    1. Copy migrate-ds-admin.pl from the server which has 1.2.7 to 1.2.2.
    2. Run migrate-ds-admin.pl to export the schema + LDIF from 1.2.2.
    3. Import the schema + LDIF into 1.2.7 using migrate-ds-admin.pl.

    If the above is true, then what parameters are needed for export and import?

    Note: the following two commands work like a charm, but since the (custom) schema is not migrated, I see a lot of errors during import:

        ./ldif2db -n NetscapeRoot -i /root/NetscapeRoot.ldif
        ./ldif2db -n userRoot -i /root/userRoot.ldif
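
    The import errors suggest the custom schema simply is not present on the new server. In 389 DS, user-defined schema normally lives in 99user.ldif, so one low-tech approach is to copy it over before importing; instance names and paths below are assumptions:

        # On the old server: export each backend (db2ldif is the counterpart
        # of the ldif2db commands above)
        ./db2ldif -n userRoot -a /tmp/userRoot.ldif

        # Copy the custom schema into the new instance, then restart it and
        # run the ldif2db imports
        scp /etc/dirsrv/slapd-oldinstance/schema/99user.ldif \
            newserver:/etc/dirsrv/slapd-newinstance/schema/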

    Read the article

  • Apache server-status page on port 8443

    - by batman
    I'm very new to Apache. I tried to enable the server-status page of Apache: I added status.conf and status.load to the mods-enabled directory and changed apache2.conf to include the whole mods-enabled directory. This is the config in status.conf (the default settings):

        <IfModule mod_status.c>
            # Allow server status reports generated by mod_status,
            # with the URL of http://servername/server-status
            # Uncomment and change the "192.0.2.0/24" to allow access from other hosts.

            <Location /server-status>
                SetHandler server-status
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1 ::1
                # Allow from 192.0.2.0/24
            </Location>

            # Keep track of extended status information for each request
            ExtendedStatus On

            # Determine if mod_status displays the first 63 characters of a request or
            # the last 63, assuming the request itself is greater than 63 chars.
            # Default: Off
            #SeeRequestTail On

            <IfModule mod_proxy.c>
                # Show Proxy LoadBalancer status in mod_status
                ProxyStatus On
            </IfModule>
        </IfModule>

    I restarted my server. I'm redirecting all ports to 8443, which in turn turns my requests into localhost:8443/server-status, and that throws a 404 error. Is there any way to get around this? Thanks in advance.
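
    A 404 here often means the 8443 virtual host's own redirect, rewrite or proxy rules swallow the request before mod_status sees it. A common workaround is to declare the handler inside that vhost and keep the path out of any catch-all rules; a sketch with an assumed vhost:

        <VirtualHost *:8443>
            # ... existing site config; exempt /server-status from any
            # catch-all redirect/proxy rules ...
            <Location /server-status>
                SetHandler server-status
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1 ::1
            </Location>
        </VirtualHost>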

    Read the article

  • Implementing an NGINX load balancer

    - by Alaa Alomari
    I have two servers (ServerA 192.168.1.10, ServerB 192.168.1.11), and the DNS for test.mysite.com points to ServerA.

    On ServerA I have this:

        upstream lb_units {
            server 192.168.1.10 weight=2 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES1
            server 192.168.1.11 weight=2 max_fails=3 fail_timeout=30s;  # Reverse proxy to BES2
        }

        server {
            listen 80;                    # Listen on the external interface
            server_name test.mysite.com;  # The server name
            root /var/www/test;
            index index.php;

            location / {
                proxy_pass http://lb_units;  # Load balance the URL location "/" to the upstream lb_units
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/test/$fastcgi_script_name;
            }
        }

    ServerB runs Apache and has the following:

        <VirtualHost *:80>
            RewriteEngine on
            <Directory "/var/www/test">
                AllowOverride all
            </Directory>
            DocumentRoot "/var/www/test"
            ServerName test.mysite.com
        </VirtualHost>

    But whenever I browse test.mysite.com, it serves me from ServerA. I also tried to mark ServerA as down in lb_units (server 192.168.1.10 down;), and it still serves me from ServerA. Any idea what I have done wrong?
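
    One thing that stands out: 192.168.1.10:80 in the pool is this same nginx instance, so any request balanced to it just re-enters the proxy instead of reaching a backend, and .php requests are never balanced at all because the regex location takes precedence over location / and goes straight to the local FastCGI. A sketch of the usual layout, with an assumed backend port:

        # Run ServerA's own backend (Apache or PHP-FPM behind nginx) on 8080,
        # so the balancer and the backend are distinct listeners:
        upstream lb_units {
            server 192.168.1.10:8080 weight=2 max_fails=3 fail_timeout=30s;
            server 192.168.1.11:80   weight=2 max_fails=3 fail_timeout=30s;
        }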

    Read the article

  • Apache VirtualHost: auto subdomain with exception

    - by Ineentho
    I've been searching for a way to automatically redirect domains to a specific folder, and found a good answer here on Server Fault: Apache2 VirtualHost auto subdomain (the accepted answer). So far everything works well; however, now I need to add an exception to it. The result I want is this:

        http://localhost/        --> E:/websites/
        http://specialDomain2/   --> E:/websites/
        http://normal1.com/      --> E:/websites/normal1.com/
        http://normalDomain.com/ --> E:/websites/normal2.com/

    I get the expected result for the two last domains, but localhost doesn't work. I copied the script from the question above and tried to add something like this:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteMap lowercase int:tolower

            # if already rewritten and we have the right path, stop right here
            RewriteRule ^(E:/websites/[^/]+/.*)$ $1 [L]

            RewriteRule ^localhost/(.*)$ E:/websites/$1 [L]   # <-- Added this row

            RewriteRule ^(.+) ${lowercase:%{SERVER_NAME}}$1 [C]
            RewriteRule ^(www\.)?([^/]+)/(.*)$ E:/websites/$2/$3 [L,E=VHOST_ROOT:E:/websites/$2/]
        </VirtualHost>

    I thought this would make sense, since I would translate it to:

        if URL == localhost/*
            do nothing (because of the [L] flag), and use the default document root specified earlier
        else
            continue

    What's wrong with this? Thanks for any help!
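
    The catch is that at the point where the added rule runs, the pattern is still matched against the bare URL path (e.g. /index.html); the hostname only becomes part of the path after the ${lowercase:%{SERVER_NAME}} rule has prepended it. So ^localhost/... can never match there. A sketch of testing the host explicitly instead:

        # Match on the Host header rather than the path, leaving the default
        # DocumentRoot in effect for localhost:
        RewriteCond %{HTTP_HOST} ^localhost$ [NC]
        RewriteRule ^ - [L]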

    Read the article

  • Using <VirtualHost> over .htaccess for mod_rewrite

    - by DarkWolffe
    I have a LAMP stack installed on Ubuntu 12.10 with three sites created under /etc/apache2/sites-available, all of which are working. My problem lies in wanting to use those files, rather than .htaccess, for appending the .php file extension to extensionless URLs. My file currently stands as such:

        # The VGC
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName thevgc.net
            ServerAlias www.thevgc.net
            DocumentRoot /var/www/www

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>

            <Directory /var/www/www/>
                Options Indexes +FollowSymLinks +MultiViews Includes
                RewriteEngine On
                RewriteBase /
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.*)$ $1.php [L,QSA]
                AddType application/x-httpd-php .php
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I'm almost certain I'm doing something wrong. All I know is that my .htaccess files refused to append the extension, or rather find the file that has the same name and load that file, so I wanted to go about it this way. Any suggestions? Here is an example page from my site.
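
    Two things in that Directory block are worth checking. First, Apache rejects an Options line that mixes bare and +/- prefixed keywords ("Options Indexes +FollowSymLinks +MultiViews Includes"), which would keep the config from loading at all. Second, MultiViews itself tries to map extensionless URLs to files and can short-circuit the rewrite. A corrected sketch of just that block:

        <Directory /var/www/www/>
            Options +Indexes +FollowSymLinks -MultiViews +Includes
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            # append .php only when the .php target actually exists
            RewriteCond %{REQUEST_FILENAME}.php -f
            RewriteRule ^(.*)$ $1.php [L,QSA]
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>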

    Read the article

  • Exchange 2010 certificate errors

    - by Frederik Nielsen
    I have a problem with my newly set up Exchange environment for our hosted customers. First off, when configuring the Outlook client, it gives a certificate warning although the certificate has been bought and installed. I am using a setup like this (where COMPANYDOMAIN is our company, which hosts the Exchange servers, and CUSTOMERDOMAIN is the customer's domain):

        autodiscover.CUSTOMERDOMAIN.TLD  CNAME  autodiscover.exchange.COMPANYDOMAIN.TLD

    Shouldn't that work? I know that Microsoft does something like that for Office 365, but I really don't think they buy a certificate for every customer, so I guess some redirection should be set up somehow. Any guidance?

    Next thing: when we accept that error and move on to actually starting Outlook, it states that the certificate is not valid for the RPC proxy server exchange.COMPANYDOMAIN.TLD. This domain is not right, as it is not included in the certificate; I would instead like this domain to be mail.exchange.COMPANYDOMAIN.TLD. I tried to run this script, setting both internal and external URLs to be the same, with no luck. Any guidance on this one?

    I am running Exchange 2010 SP2, with CAS, HT and MBX split up on three different servers.
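
    For the RPC proxy name specifically, the hostname Outlook complains about comes from the Outlook Anywhere external hostname on the CAS, which can be pointed at the name the certificate actually contains. A sketch from the Exchange Management Shell; the identity and hostnames are placeholders:

        Set-OutlookAnywhere -Identity "CAS01\Rpc (Default Web Site)" `
            -ExternalHostname "mail.exchange.companydomain.tld"

    For the per-customer autodiscover warning, hosted setups commonly publish an SRV record instead of a CNAME, which avoids needing each customer's name in the certificate (Outlook follows the SRV target and prompts once to allow the redirect):

        _autodiscover._tcp.customerdomain.tld. 3600 IN SRV 0 0 443 mail.exchange.companydomain.tld.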

    Read the article

  • Proxy auto-config dnsResolve doesn't seem to resolve subdomains

    - by HorusKol
    We're running on a Windows domain and have a DNS server to control computer names on our intranet. The following PAC works great for basic hostnames on our intranet, but we're setting up some subdomain-like names (for example, redesign.buildbox), and it isn't resolving these, even though the names are resolvable through other means (such as nslookup). Other than checking to see if the host contains ".buildbox" or another internal domain, is there a way to make it work? Maybe I could try appending the Windows domain to the host (can you concatenate strings in a PAC)?

        function FindProxyForURL(url, host) {
            // If IP address is internal or hostname resolves to internal IP, send direct.
            var resolved_ip = dnsResolve(host);
            if (isInNet(resolved_ip, "129.2.2.0", "255.255.255.128"))
                return "DIRECT";
            if (isInNet(resolved_ip, "10.1.1.0", "255.255.255.0"))
                return "DIRECT";
            if (isInNet(resolved_ip, "150.1.2.0", "255.255.255.248"))
                return "DIRECT";

            // All other traffic uses below proxies, in fail-over order.
            return "PROXY 192.111.222.111:8080; DIRECT";
        }
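
    PAC files are ordinary JavaScript, so string concatenation does work. A sketch of retrying the lookup with the Windows DNS suffix appended when the first resolution fails; the suffix is a placeholder:

        var resolved_ip = dnsResolve(host);
        if (!resolved_ip) {
            // retry with the intranet suffix appended (placeholder suffix)
            resolved_ip = dnsResolve(host + ".corp.example.com");
        }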

    Read the article

  • What does the -MailboxCredential parameter mean?

    - by cotablise
    Hello, I am writing regarding the Exchange PowerShell commands. When I want to use the following cmdlets, I have to insert the parameter -MailboxCredential:

        Test-OwaConnectivity
        Test-OutlookWebServices
        Test-ImapConnectivity
        Test-PopConnectivity

    On the official Microsoft site it is written: "The MailboxCredential parameter specifies the mailbox credential for a single URL test." I am not sure why this parameter is needed: I inserted incorrect credentials, yet the command finished successfully. Could you tell me why this parameter is needed?

    Example with wrong/incorrect credentials:

        [PS] C:\>Test-WebServicesConnectivity -ClientAccessServer EXhub1 -MailboxCredential (Get-Credential blablabla)

        CasServer LocalSite     Scenario  Result  Latency(MS) Error
        --------- ---------     --------  ------  ----------- -----
        EXhub1    Default-Fi... GetFolder Failure             [System.Net.WebExcept...

    Without the parameter:

        [PS] C:\>Test-WebServicesConnectivity -ClientAccessServer EXhub1
        WARNING: Test user 'extest_91ef41d34eef4' isn't accessible, so this cmdlet won't be able to test Client Access
        server connectivity. Could not find or sign in with user ********\extest_91ef41d34eef4. If this task is being run
        without credentials, sign in as a Domain Administrator, and then run Scripts\new-TestCasConnectivityUser.ps1 to
        verify that the user exists on Mailbox server EXHUB1.******
            + CategoryInfo          : ObjectNotFound: (:) [Test-WebServicesConnectivity], CasHealthCouldN...edInfoException
            + FullyQualifiedErrorId : FB9A14B6,Microsoft.Exchange.Monitoring.TestWebServicesConnectivity
        WARNING: No Client Access servers were tested.

    Thank you in advance.
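
    For comparison, the usual pattern is to pass the credentials of a real test mailbox rather than relying on the auto-created extest_ user; the names below are placeholders:

        # Prompt once, then run the synthetic logon against a known mailbox
        $cred = Get-Credential CONTOSO\monitoringuser
        Test-WebServicesConnectivity -ClientAccessServer EXHUB1 -MailboxCredential $cred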

    Read the article

  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message:

        Could not open JDBC Connection for transaction; nested exception is
        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet
        successfully received from the server was 25899 milliseconds ago. The last
        packet sent successfully to the server was 25899 milliseconds ago, which is
        longer than the server configured value of 'wait_timeout'. You should
        consider either expiring and/or testing connection validity before use in
        your application, increasing the server configured values for client
        timeouts, or using the Connector/J connection property 'autoReconnect=true'
        to avoid this problem.

    For MySQL, the global 'wait_timeout' and 'interactive_timeout' values are set to 3600 seconds, and 'connect_timeout' is set to 60 seconds, so the wait timeout is much higher than the 26 seconds (25899 ms) mentioned in the exception trace. I use DBCP for connection pooling, and here is the Spring bean config for the data source:

        <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/db"/>
            <property name="username" value="xxx"/>
            <property name="password" value="xxx"/>
            <property name="poolPreparedStatements" value="false"/>
            <property name="maxActive" value="3"/>
            <property name="maxIdle" value="3"/>
        </bean>

    Any idea why this could be happening? Will using c3p0 solve the problem?
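
    Regardless of which pool is used, the standard defence is to validate connections before handing them to the application, so that stale ones are silently replaced instead of thrown at the caller. A sketch of the extra DBCP properties for the bean above (values are illustrative):

        <!-- validate pooled connections so stale ones are evicted, not used -->
        <property name="validationQuery" value="SELECT 1"/>
        <property name="testOnBorrow" value="true"/>
        <property name="testWhileIdle" value="true"/>
        <property name="timeBetweenEvictionRunsMillis" value="300000"/>
        <property name="minEvictableIdleTimeMillis" value="600000"/>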

    Read the article

  • How to get HTTP request details in a tcpdump?

    - by tucson
    I am trying to get a tcpdump trace of some HTTP requests. Here is what I've got so far (I replaced the real IP addresses with REMOTE and LOCAL):

        C:\>Windump -na -i 3 ip host REMOTE and ip src LOCAL and tcp port 80
        Windump: listening on \Device\NPF_{8056BE5E-BDBB-44E6-B492-9274B410AD66}
        13:13:34.985460 IP LOCAL.4261 > REMOTE.80: . 1784894764:1784894765(1) ack 1268208398 win 65535
        13:13:38.589175 IP LOCAL.4302 > REMOTE.80: F 3708464308:3708464308(0) ack 982485614 win 65535
        13:13:38.589285 IP LOCAL.4303 > REMOTE.80: F 890175362:890175362(0) ack 2462862919 win 65535
        13:13:38.589330 IP LOCAL.4304 > REMOTE.80: F 1838079178:1838079178(0) ack 156173959 win 65535
        13:13:38.589374 IP LOCAL.4305 > REMOTE.80: F 3952718843:3952718843(0) ack 2209231545 win 65535
        13:13:38.589413 IP LOCAL.4306 > REMOTE.80: F 446105750:446105750(0) ack 3141849979 win 65535
        13:13:38.590265 IP LOCAL.4302 > REMOTE.80: . ack 2 win 65535
        13:13:38.590403 IP LOCAL.4304 > REMOTE.80: . ack 2 win 65535
        13:13:38.590429 IP LOCAL.4303 > REMOTE.80: . ack 2 win 65535
        13:13:38.590484 IP LOCAL.4305 > REMOTE.80: . ack 2 win 65535
        13:13:38.590514 IP LOCAL.4306 > REMOTE.80: . ack 2 win 65535

    But I do not get the following level of detail:

        Request URL: http://domain.com/index.php
        Request Method: POST
        Status Code: 200 OK

        POST /index.php HTTP/1.1
        Host: domain.com
        Connection: keep-alive
        Content-Length: 151
        Cache-Control: max-age=0
        etc

    How can I get this level of data?
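
    By default tcpdump/WinDump prints only packet headers. Printing HTTP payloads needs the full snap length and ASCII output; a sketch with the same filter, assuming a WinDump build recent enough to support -A:

        C:\>Windump -na -i 3 -s 0 -A ip host REMOTE and ip src LOCAL and tcp port 80

    Here -s 0 captures whole packets instead of just the first bytes, and -A dumps each packet's payload as ASCII, so request lines and headers become visible.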

    Read the article

  • Windows Small Business System 2003. SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily; it is set up with a wizard in the Monitoring and Performance part of the Server Management console. It often fails with a "The page cannot be displayed" error, and the event log shows:

        Event Type:     Error
        Event Source:   ServerStatusReports
        Event Category: None
        Event ID:       1
        Date:           1/16/2011
        Time:           6:03:14 AM
        User:           N/A
        Computer:       ALPHA
        Description:    Server Status Report:
        URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1
        Error Message: Timeout expired. The timeout period elapsed prior to
        completion of the operation or the server is not responding.
        Stack Trace:
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state)
            at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
            at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected)
            [plus lots more stack trace]

    This has been happening for years :) and I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run; if I haven't run it for a while, it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet, I think), which reindexes the database, etc. Those two things seem to get it working again for a while. Recently the "while" has become a couple of weeks.

    My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder if something in the WSUS SQL schema is not indexed properly; the time that the Server Cleanup Wizard takes seems ridiculous. Thanks
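
    Since the two manual steps reliably help, one option is simply to schedule the reindex as a task. A sketch of running the script from the command line; the named pipe below is the usual address of the Windows Internal Database that WSUS 3 uses, but both it and the script path are assumptions for this particular setup:

        REM run WsusDBMaintenance.sql against the WSUS internal database
        sqlcmd -S np:\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -i C:\Scripts\WsusDBMaintenance.sql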

    Read the article

  • Apache, Tomcat and mod_jk for load balancing

    - by pHk
    Hi guys. I've set-up a basic Apache (2.2.x) and Tomcat (6.0.x) set-up using mod_jk for load balancing using the worker.properties file. Preliminary testing seems to show that this works relatively well, and it was quite easy to set-up. However; the fact that it was so easy to set-up has got me a little worried. We're dealing with 100 - 300 concurrent users using the same web application (deployed on 2 or 3 Tomcat instances). I have done a little Googling and looking around on here and there seems to be more than 1 way to accomplish this (one example on here used a balancer:// style URL, which I've never seen before in an Apache config). For example, one question I ask myself is how reliable the load detection on mod_jk really is (Busyness, Session, Request, etc). In your experience, does this set-up prove to be reliable in real world scenarios? Any pointers on improvements, pit falls or interesting literature/articles? I've worked with Apache before, but am in no way an expert. Thanks in advance.

    Read the article
