Search Results

Search found 37688 results on 1508 pages for 'site search'.

  • Identify Long Running or Slow PHP Scripts

    - by Kirk
    I have a web server at yougetsignal.com that gets around 25K visits a day. Sometimes the site feels a bit sluggish. I am hosting it on nginx with php5-fpm. Is there a way to see a list of all the long-running requests coming to the site? I'd love a real-time list of the active requests PHP is handling and how long each has been running, kind of like top, but just for the web server. That would let me know how long requests are taking and which script is the culprit. Anyone have any ideas on how I can do this?
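
    A starting point (a sketch, not a full answer): php5-fpm can log any request that exceeds a time threshold and can expose a live status page. Pool directives, assuming a stock pool file such as /etc/php5/fpm/pool.d/www.conf:

        ; log any request running longer than 5s, together with a PHP backtrace
        request_slowlog_timeout = 5s
        slowlog = /var/log/php5-fpm.slow.log

        ; enable the live status page; querying it with ?full lists each
        ; worker's current request, script, and how long it has been running
        pm.status_path = /fpm-status

    nginx then needs a location block that passes /fpm-status to the pool via fastcgi_pass (restricted to localhost); that page is the closest thing to "top for the web server".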

  • Lightweight, low cost enterprise backup solution

    - by Scott
    Looking for a backup solution, primarily for Windows clients (XP/7), that will either back up to two different servers (one on site, one off site over the internet; this can be our own server), or back up to one server that we would then somehow back up offsite over the internet. By lightweight, I mean the backup client software should not eat up much memory or processor time, since some of the client machines are older. I am used to CrashPlan for home use: the pricing is nice for the amount of backup I get, and it works great and is easy to install and get going. I can back up to my own machines locally and over the net. However, the price is going to be a little steep for enterprise-level backup of 1500+ machines. Are Zmanda and Bacula good choices to consider? Are they lightweight? Can the clients/agents be set to go over the net and/or to multiple backup servers?

  • Command-line find/replace help

    - by Chrisbloom7
    I've got a set of 5000+ files that I need to do a simple search and replace in. I have been doing it in a text editor (EditPlus) by opening 500 files at a time, doing a global search/replace, saving all, closing, and so on. That's taking literally hours, it's boring and tedious, and I've already done it once today and need to do it again because all the files got refreshed. Is there a way to do this from the Bash command line? Here are the details:

    Find: onchange="document.location ='/products/view.html/view/'+this.value"
    Replace with: onchange="alert('Not implemented')" style="display: none"

    All of the files have a .HTM extension, but they are nested in several subdirectories.
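
    A sketch of the usual find/sed combination for exactly this replacement, assuming GNU sed (-i edits in place; the .bak suffix keeps a backup of every file touched):

        find . -type f -iname '*.htm' -print0 | xargs -0 sed -i.bak \
          "s|onchange=\"document.location ='/products/view.html/view/'+this.value\"|onchange=\"alert('Not implemented')\" style=\"display: none\"|g"

    -iname matches the .HTM extension case-insensitively across subdirectories, and -print0/-0 keeps filenames with spaces safe. Spot-check a few files with grep afterwards, then delete the .bak copies.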

  • How do I install an intermediate certificate?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool (http://www.networking4all.com/en/support/tools/site+check/), I get the following error:

        Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from a tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    While setting up the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded), but it was marked optional. Could this be the cause of my issue? If so, how do I create the certificate chain? I have tried googling that last question, but I'm only getting more confused. Please help; many thanks in advance.

    UPDATE: Thanks, all, for the helpful advice. If you make a request to Verisign they will give you a certificate chain, but this chain includes the public crt, the intermediate crt, and the root crt. Make sure to remove the public crt (the topmost certificate) from the chain before adding it to the certificate chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above may not work on older Android versions such as 2.1 and 2.2. To make it work on older Android, see this Verisign KB article: https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR657&actp=LIST&viewlocale=en_US. On that page, click the "retail ssl" tab, then "secure site", then "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you can't find it, here is the direct link: https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1409. If you are using GeoTrust certificates the solution is much the same for Android devices; you just need to copy and paste their intermediate certs instead. PS: sorry for the long URLs, but "new users can only post a maximum of two hyperlinks".
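
    For the "how do I create the certificate chain" part: the chain is just the CA-supplied intermediate (and, if required, root) certificates concatenated in order, issuer of your certificate first. A sketch with hypothetical filenames:

        # download the intermediate/root certs from your CA, then:
        cat intermediate.pem root.pem > chain.pem

        # check that the chain actually validates your certificate
        openssl verify -CAfile chain.pem output.pem

    chain.pem (without your own certificate) is what goes into the load balancer's certificate chain box.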

  • Mac OS X 10.5/6: authenticate against NIS or LDAP when both servers have your username

    - by Wang
    We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record? I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it. Is there any way to make the OS check the rest of the search path after it finds the username?
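
    For reference, the effective search path can be inspected from the command line; a sketch using dscl (the node name in the last command is hypothetical):

        # list the directory nodes the machine knows about
        dscl localhost -list /

        # show the authentication search policy and node order
        dscl /Search -read / SearchPolicy CSPSearchPath

        # append a node to a custom search path
        sudo dscl /Search -append / CSPSearchPath "/LDAPv3/ldap.example.org"

    This shows what the OS will try, but it does not by itself change the stop-on-first-match behaviour described above.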

  • Odd Suhosin memory alerts

    - by slice
    I am getting a lot of odd Suhosin alerts in my syslog. For example:

        Jun 9 08:46:11 suhosin[9764]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '157.55.39.180', file '/var/www/site/index.php')
        Jun 9 08:46:11 suhosin[9744]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '109.74.2.136', file '/var/www/site/test.php')
        Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 0 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown')
        Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown')

    What is happening here? Why 0 bytes, or 2145386496 bytes (about 2 GB)? Why does it sometimes state the attacker and the requested script, and sometimes 'REMOTE_ADDR not set' and file 'unknown'? How do I proceed to figure this out?
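
    A first diagnostic step (a sketch, assuming the webroot from the alerts): find what calls ini_set() on memory_limit. Values like 2145386496 usually come from code requesting memory_limit = -1 ("unlimited") or a similarly huge value, which Suhosin reports as a large positive byte count, and 'REMOTE_ADDR not set' with file 'unknown' usually means the script ran from the CLI or cron rather than from a web request.

        # locate every memory_limit override under the webroot
        grep -rn --include='*.php' "memory_limit" /var/www/site

        # CLI and cron jobs have no REMOTE_ADDR; list the candidates
        crontab -l
        ls /etc/cron.d/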

  • How can I simulate a slow machine in a VM?

    - by Nathan Long
    I'm testing an AJAX-heavy web application. I develop on a new Mac, but I use VMware Fusion (currently 3.1.2) to test in Windows XP, using IETester to simulate older versions of IE. This lets me see how older IE versions render the site, but I'd also like to see how the site performs on an older machine. I see in the VM's settings that I can decrease the RAM; is there a way to also dial down the processor speed? How else might I simulate a slow machine? (I am also going to look into simulating a slow internet connection.)

  • Nginx server_name is set to mydomain.com, so why is www.mydomain.com getting served too?

    - by Lorenz Forvang
    I have my Nginx conf set up as follows:

        server {
            listen 443 ssl;
            server_name mydomain.com;
            ...
        }

    When I load https://mydomain.com, the site loads fine. But when I load https://www.mydomain.com, the site loads as well. Why is this happening? I set up the DNS records using Amazon Route 53 as:

        A      mydomain.com      xxx.xxx.xxx.xxx (IP)
        CNAME  www.mydomain.com  mydomain.com

    So is a request to www.mydomain.com arriving at Nginx as a request to mydomain.com? If so, how do I differentiate requests to www.mydomain.com and mydomain.com at my server?
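
    A likely explanation, with a hedged sketch of the fix: the CNAME only affects DNS, so the browser still sends Host: www.mydomain.com. nginx finds no server block whose server_name matches that host, so it falls back to the default server for port 443, which here is your only server block. To treat the two names differently, give www its own block:

        server {
            listen 443 ssl;
            server_name mydomain.com;
            # ... existing site config ...
        }

        server {
            listen 443 ssl;
            server_name www.mydomain.com;
            # redirect www to the bare domain (or serve something else)
            return 301 https://mydomain.com$request_uri;
        }

    Both blocks need ssl_certificate directives, and the certificate has to cover www.mydomain.com for browsers not to warn on the redirect.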

  • What is the most relevant statistical information from Google Analytics for website optimization? [closed]

    - by Paulo
    I was wondering: what is considered the most useful statistical information from Google Analytics for website optimization? Example 1: a lot of visitors leave the website on a specific page. This could be because there aren't enough onward links on that page, so visitors don't see a reason to continue through the website. Example 2: your traffic sources are 70% direct visits, 28% referrals from other websites, and 2% from search engines. You could conclude that your website is not easily found in search engines, which could be interpreted as a sign that your keywords and/or meta description aren't good enough. I'd like to hear some thoughts from other people!

  • New XAML-Based User Group started in Birmingham, Alabama.

    - by mbcrump
    I'm pleased to announce that a new XAML-based user group has started in Birmingham, Alabama. The group is hosted by Michael Crump and Jonathan Marbutt. The reason is simple: we feel that Birmingham needs a fresh start. We are both very passionate about .NET and hope to share our experiences with the community. We have created a site (http://allaboutxaml.net) and our first meetup is now scheduled. We are here to discuss all things XAML, including Silverlight, WPF, Windows Phone 7, Surface, LightSwitch, and Silverlight with SharePoint 2010. We:

    - believe strongly in XAML and are passionate about what we talk about.
    - believe in an open forum.
    - believe a user group should be fun as well as educational.
    - do not claim to be experts on anything; we are here to share.
    - record every presentation and put it on the web for others to benefit.
    - meet monthly but are flexible on the actual date.
    - hold our events outside of technical colleges.
    - have families, so our meetings start and stop on time.
    - welcome new speakers.
    - welcome adult beverages.

    Our first meeting will be on February 15th, 2011 (6 PM) at Logan's on HWY 280. It will be more of a meetup than a meeting: I would like to discuss current XAML-based projects you are working on and get to know one another. If you are interested in coming, please sign up here so that we know how many to expect. Please visit our site to find out more about us: http://allaboutxaml.net.

  • Usual Suspects: Typical 3rd Party Entities in E-Commerce [closed]

    - by zharvey
    I am doing some requirements/analysis work for a web app that I'd like to build (Ruby/Java developer here). This web app would have a store front and shopping cart, and would need to be totally compliant with all e-commerce best practices. It's amazing how much non-technical information comes up when you search for phrases like "how does e-commerce work", but very little comes up in the way of technical details. As such, I'm having extreme frustration finding answers to what I consider pretty straightforward questions. I came here because I believe this question is not off-topic; if it is, please leave a comment as to why it does not belong here and I will happily remove it myself (upvotes if your comment can point me to the correct place for it!). So then:

    - What third parties will I need to work with to have a modern, web-compliant e-commerce site? So far I can account for a payment gateway provider like Authorize.net and an SSL certificate provider like Trustwave. Any others?
    - What other standards besides PCI compliance will I be held to (besides governing laws, of course!)?
    - Vulnerability scans: PCI compliance requires quarterly scans. If I'm a "Level 4" (low-volume) merchant, does that still apply to me? Regardless, my backend architecture is quite large, with web servers, app servers, a database, message brokers, and more. Does each of these servers need to be scanned? If not, which servers do need the quarterly scans?

    I usually hate to ask micro-questions inside of one large one, but these are so closely related that asking them separately felt like spamming the site with too many petty questions. Thanks in advance!

  • Server 2008 R2 Remote Desktop Gateway Role and IIS7

    - by user137466
    I am attempting to set up an RD Gateway for a client. When I first set it up I noticed that IIS did not have the 'Default Web Site', so I created it, assigned it an ID of 1, and set the bindings to ports 80 and 443. I then reinstalled the RD Gateway role with the idea that it would then configure IIS correctly. It did not. How would I go about making sure a reinstall of the Remote Desktop Gateway role configures IIS correctly? I cannot reinstall IIS, as there is a site already on there that I cannot take down.
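
    One scripted way to cycle just the role service, leaving IIS and the existing site alone (a sketch; RDS-Gateway is the 2008 R2 feature name as far as I recall, so confirm it with Get-WindowsFeature first):

        Import-Module ServerManager

        # confirm the exact feature name on this box
        Get-WindowsFeature *RDS*

        # remove and re-add only the RD Gateway role service
        Remove-WindowsFeature RDS-Gateway
        Add-WindowsFeature RDS-Gateway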

  • Connecting to FileZilla FTP Server on XAMPP localhost with Dreamweaver

    - by Keyslinger
    I am running XAMPP 1.7.4. I've installed the FileZilla FTP server and I'm running it as a service. I created a user, and when I connect as that user with the FileZilla client, I have no trouble connecting. However, in Dreamweaver CS5, when I create a site using the Manage Sites dialog and go to configure its FTP settings, I get a message that reads "An FTP error occurred - cannot make connection to host" and goes on to list some possible causes and solutions. I have tried setting the connection to passive, to no avail. I understand that it is not necessary to use FTP when I am editing the site locally, but this is for a Dreamweaver class I am teaching and I want my students to learn Dreamweaver's FTP tools. How can I make my localhost FTP connection work in Dreamweaver?

  • Server problem (duh)

    - by j-t-s
    Sorry the title couldn't be more specific. I installed Abyss Web Server. I'm running Windows XP Home Edition and I have wireless mobile broadband internet. I (and other people on other networks) used to be able to access my site by entering my IP address in the browser, but after I reformatted and installed Abyss Web Server again, this no longer works. There are no errors. I CAN visit my own site by entering my IP address, but nobody else can; for them it just says "connecting" in the browser's status bar and never changes. I have consulted the docs and found no help. Google hasn't helped with this problem either. Can somebody please help? Thank you :)

  • Finding the shortest path through a digraph that visits all nodes

    - by Boluc Papuccuoglu
    I am trying to find the shortest possible path that visits every node in a graph (a node may be visited more than once, and the solution may pick any node as the starting node). The graph is directed, meaning that being able to travel from node A to node B does not mean one can travel from node B to node A. All distances between nodes are equal. I was able to code a brute-force search that found a path of only 27 nodes when I had 27 nodes, each connected to 2 or 1 other nodes. However, the actual problem I am trying to solve consists of 256 nodes, with each node connecting to either 4 or 3 other nodes. The brute-force algorithm that solved the 27-node graph can produce a 415-node solution (not optimal) within a few seconds, but with the processing power I have at my disposal it takes about 6 hours to arrive at a 402-node solution. What approach should I use to arrive at a solution that I can be certain is the optimal one? For example, use an optimizer algorithm to shorten a non-optimal solution? Or somehow adapt the brute-force search to discard paths that cannot be optimal? EDIT (copying a comment to an answer here to better clarify the question): To clarify, I am not saying that there is a Hamiltonian path and I need to find it; I am trying to find the shortest path in the 256-node graph that visits each node AT LEAST once. With the 27-node run, I was able to find a Hamiltonian path, which assured me that it was an optimal solution. I want to find the shortest such solution for the 256-node graph.
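
    For context, this is essentially an asymmetric travelling-salesman problem on the metric closure of the graph: first compute all-pairs shortest hop counts (one BFS per node, since every edge has the same weight), then look for the cheapest tour over that complete distance matrix, which automatically allows revisits. A minimal sketch of the BFS step in Python (the adjacency format is hypothetical):

        from collections import deque

        def all_pairs_hops(adj):
            """adj: dict mapping node -> list of successor nodes (directed).
            Returns dist[u][v] = minimum number of edges from u to v;
            unreachable pairs are simply absent."""
            dist = {}
            for src in adj:
                d = {src: 0}
                q = deque([src])
                while q:
                    u = q.popleft()
                    for v in adj[u]:
                        if v not in d:
                            d[v] = d[u] + 1
                            q.append(v)
                dist[src] = d
            return dist

    Exact Held-Karp dynamic programming is O(n^2 * 2^n) and hopeless at n = 256, so certainty of optimality realistically means feeding the 256x256 matrix to a TSP solver that proves optimality via branch and bound, rather than extending the brute-force search.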

  • How can I whitelist a user-agent in nginx?

    - by djb
    I'm trying to figure out how to whitelist a user agent in my nginx conf. All other agents should be prompted for a password. In my naivety, I tried putting the following in before deny all:

        if ($http_user_agent ~* SpecialAgent) {
            allow;
        }

    but I'm told the "allow" directive is not allowed there (!). How can I make it work? A chunk of my config file:

        server {
            server_name site.com;
            root /var/www/site;

            auth_basic "Restricted";
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;

            allow 123.456.789.123;
            deny all;
            satisfy any;

            #other stuff...
        }

    Thanks for any help.
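
    A sketch of one common workaround: allow/deny only take addresses, but auth_basic accepts a variable in reasonably recent nginx versions, so a map can switch the realm to "off" for the whitelisted agent:

        # in the http {} block
        map $http_user_agent $realm {
            default          "Restricted";  # everyone else gets the password prompt
            "~*SpecialAgent" off;           # whitelisted agent skips auth entirely
        }

        server {
            ...
            auth_basic $realm;
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
        }

    Bear in mind a User-Agent header is trivially spoofed, so this hides the prompt rather than adding real security.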

  • Update saved password for basic authentication using a script

    - by Kalamane
    I have a website that uses basic authentication as described on this webpage. Each of the computers I manage has the password saved in its browser. There is only one username and password for this. After someone logs in to the site this way, they are presented with their individual username and password prompt as part of the web page. The purpose of the initial username/password is to discourage non-technical employees who aren't supposed to be using the page from even viewing it. So far, when we've had to change this password, I've gone to each computer and updated the saved password manually. I'm writing a startup script to configure other aspects of these systems so that I can maintain them more easily, and I'd like it to update the saved password too. The operating system on these machines is Windows XP SP3, and the browsers used to access the site are IE8 and IE9. How can I update saved basic authentication credentials for a website via a script?

  • Why will Screenshot not show up in Dash?

    - by Bruce
    I'm relatively new to Ubuntu and am running into a problem I've not seen before. This is on an Ubuntu 12.04 system. I can't find Screenshot: I'm trying to take a shot of a web page, so I go to the Dash and type in Screenshot, and nothing shows up. I've done this many times before, but suddenly Screenshot is not found. I downloaded an app called Shutter, and it seems to have installed just fine, but like Screenshot, I don't know how to find it; the Dash reveals nothing. In fact, no applications show up in the Dash in response to a search. For instance, I can type Firefox and get no results either, even though I'm using Firefox to submit this question. Is there something I can do to get the Dash to show applications? Search results show only files, folders, and downloads. I did find one answer on Ask Ubuntu, which said that none of the usual fixes worked for the poster, but that after more searching:

        rm ~/.cache/software-center -R

    worked like a charm, followed by:

        unity --reset &

    for the changes to take effect within the Dash. However, when I issued the unity --reset & command, it resulted in an internal error, and I'll have to reboot in order to reset my computer (which fortunately is allowing me to complete this question first). So I'm still wondering why applications won't show in the Dash, and I will appreciate any help.

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but that had no effect either. Could someone offer a good suggestion to solve this problem?

  • Why does Windows share permissions change file permissions?

    - by Andrew Rump
    When you create a (file system) share (on Windows 2008 R2) with access for specific users, does it change the access rights on the files to match the access rights on the share? We just killed our intranet web site when sharing the InetPub folder (to a few specified users): it removed the file access rights for Authenticated Users, meaning users could no longer log in using single sign-on (using IE and AD). Could someone please tell me why it behaves like this? We now have to reapply the access rights every time we change the users on the share, killing the site in the process each time.
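
    Background, plus a sketch of a repair command: the file sharing wizard ("Share with specific people") rewrites the NTFS ACLs on the folder to match the people you pick, which is what stripped Authenticated Users; the Advanced Sharing dialog sets share permissions only and leaves NTFS ACLs alone. To restore the web site's read access after the fact, something like this icacls call (path and rights are assumptions, adjust to your layout):

        icacls "C:\inetpub" /grant "NT AUTHORITY\Authenticated Users":(OI)(CI)RX /T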

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they get seems to be generated from the DNS search path, not from a reverse lookup on the IP. Take the following configuration. BIND is configured with the hostname and reverse name. Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @              IN  SOA  ns0.local.net. hostmaster.local.net. (
                                2014082101 10800 3600 604800 600 )
        @              IN  NS   ns0.local.net.
        @              IN  MX   5 mail.local.net.
        my-new-server  IN  A    10.32.2.30

    And reverse:

        @     IN  SOA  ns0.local.net. hostmaster.local.net. (
                       2014082101 10800 3600 604800 600 )
        @     IN  NS   ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2  IN  PTR  my-new-server.srv.local.net.

    Then dhcpd is configured to hand out static leases based on MAC addresses like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
            option subnet-mask 255.255.254.0;
            option routers 10.32.2.1;
            option domain-name-servers 10.32.2.1;
            option domain-name "util.of1.local.net of1.local.net srv.local.net";
            site-option-space "pxelinux";
            option pxelinux.magic f1:00:74:7e;
            if exists dhcp-parameter-request-list {
                option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
            }
            group {
                option pxelinux.configfile "pxelinux.cfg/pxeboot";
                host my-new-server {
                    fixed-address my-new-server.srv.local.net;
                    hardware ethernet aa:aa:aa:bb:bb:bb;
                }
            }
        }

    So the hostname should be my-new-server.srv.local.net; however, when building an Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts the hostname is correct; it's only on Precise/12.04 nodes that we have the problem. Doing forward and reverse lookups on the host and IP returns the correct results:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are incorrect too:

        127.0.0.1 localhost
        127.0.1.1 my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path into the local address line, and the FQDN according to the server becomes the short hostname plus the first domain in the search path. Is there a way to get around this behaviour, or to fix it so the hostname is picked up correctly? It picks up the first part of the hostname correctly; the rest is wrong.
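
    A hedged sketch of the usual fix: option domain-name is defined to carry a single domain, and Precise's dhclient builds the hostname/hosts entry from it, so the three space-separated names confuse it even though older releases coped. Declaring the search list with option domain-search (RFC 3397) and leaving domain-name as the host's real domain usually sorts this out:

        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";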

  • Recommendation for a non-standard SSL port

    - by onurs
    Hey guys, on our server I have a single IP and need to host two different SSL sites. The sites have different owners, so they have different SSL certificates and can't share a single certificate with SAN. As a last resort, I modified the web application to allow a specified port for secure pages, and for simplicity I picked port 200. However, I'm worried that some visitors may be unable to see the site because their firewalls or proxies block SSL connections on that port. I heard that a couple of people were unable to see the website (a home user and someone at an enterprise company), though I don't know if this was the reason. So, any recommendations for a non-standard SSL port (443 is used by the other site) that is more likely to work for visitors than port 200? Like 8080 or 8443, perhaps? Thanks!

  • Do I need a VPN to secure communication over a T1 line?

    - by Seth
    I have a dedicated T1 line that runs between my office and my data center. Both ends have public IP addresses. On both ends we have T1 routers which connect to SonicWall firewalls. The SonicWalls do a site-to-site VPN and handle the network translation, so the computers on the office network (10.0.100.x) can access the servers in the rack (10.0.103.x). So the question: can I just add static routes to the SonicWalls so each network can access the other without the VPN? Are there security problems (such as someone else adding the appropriate static route and being able to access either the office or the datacenter)? Is there another/better way to do it? The reason I'm looking at this is that the T1 is already a pretty small pipe, and the VPN overhead makes connectivity really slow.

  • Plesk directory structure problems

    - by johnnietheblack
    I have an entire website with the following directory structure:

        /example.com
            /html (public)
                /css
                /js
                index.php
            /lib
                session.php
                other_lib_files.php
            /views
                index.php
            /models
            /controllers

    As illustrated, the html directory is public, and anything above it is private. My site now needs to upgrade servers, and the new server (Linux w/ Plesk) has the following structure (reduced to the problematic parts below):

        /myplesksite.com
            /httpdocs
                /css
                /js
                index.php
            /private
                /lib
                /models
                /views

    What I would THINK is that I should be able to put my /lib, /views, /models, etc. in the directory directly above /httpdocs, the same way I had it on my previous server. Is that possible? Or do I have to put them in /private? I would really love not to have to adjust internal paths throughout the site if not necessary...

    Read the article

< Previous Page | 519 520 521 522 523 524 525 526 527 528 529 530  | Next Page >