Search Results

Search found 44204 results on 1769 pages for 'web designer'.

  • Strange issue with 74.125.79.118

    - by Domenic
    I'm facing a strange issue on a Linux server. After frequent crashes, analysis found that the server is driven to collapse by a huge number of connections to the IP 74.125.79.118, originating from the PHP scripts of the hosted web sites. An in-depth analysis of the files found no malware infections. IP 74.125.79.118 belongs to Google. After a Google search I realized that connections to this IP are generated by YouTube videos embedded on web sites, among other Google features like SafeSearch. But I don't understand how this kind of behavior can lead to the collapse of the server, and the oddity of the situation leads me to think it is far from being attributable only to Google and YouTube. I've also found that blocking connections from eth0 to 74.125.79.118:80 doesn't solve the issue, but if I stop DNS traffic from eth0 to the internet, the connections to 74.125.79.118 stop. I'm really confused about this. Any suggestions? Cheers.
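    For illustration, a quick capture along these lines (the interface name is taken from the post; the tools are assumed to be installed) would show which DNS lookups are resolving to that address and which processes hold the connections, which may tie the traffic back to a specific script or embed:

        # Watch DNS answers on eth0 and pick out responses containing the suspect IP
        tcpdump -i eth0 -n udp port 53 | grep 74.125.79.118
        # Separately, list the processes holding connections to that IP right now
        netstat -tnp | grep 74.125.79.118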

  • when to upgrade server to include more cores, versus more processors, versus additional server?

    - by gkdsp
    The server hosting market is separated into single, dual, quad, etc. processors, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return the result to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching...

    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided across up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) should we keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice limited, for example, by RAM (when you need more RAM than the box can handle, it's time to add another server), or by something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?
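    As a rough way to answer Question 1 empirically, a concurrency benchmark sketch like the following (URL, endpoint, and request counts are all placeholders, not from the original post) shows whether throughput scales with simultaneous clients on a given core count:

        # ApacheBench (from apache2-utils); compare throughput at concurrency 1 and 4
        ab -n 100 -c 1 http://testbox.example.com/calc
        ab -n 100 -c 4 http://testbox.example.com/calc
        # If requests/sec scales roughly with -c up to the core count, requests are
        # being handled in parallel, one per core; if not, something is serializing them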

  • Can't Connect SQL server - process being used by another process. Conflict with IIS?

    - by shinya
    I'm having a problem connecting to MS SQL Server (2012 Express) after accessing a database through IIS (a web site). I can access the data through the web site no problem, but I can't access the data from any other program (e.g. SSMS) until I reboot the SQL Server machine. It seems that the connection stays open even if I close the browser. Here is the error message I'm getting:

        Unable to open the physical file "C:---------". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)".
        Unable to open the physical file "C:-------". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)".
        Cannot open user default database. Login failed. Login failed for user 'Myserver\myname'. (.Net SqlClient Data Provider)
        Server Name: MYPC\SQLEXPRESS, Error Number: 5120, Severity: 16, State: 101, Line Number: 65536

    I followed the help link, and it told me to move TCP/IP before named pipes in the protocol order list. I tried that, but it didn't help at all. What are the proper settings on SQL Server or IIS to release the process after a browser is closed? How do I avoid getting this error? Thank you for your help.
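    For illustration, a quick look at which session still holds the database open (instance name as given above; the SPID below is a made-up example) can confirm that it is the web application's pooled connection:

        sqlcmd -S MYPC\SQLEXPRESS -E -Q "EXEC sp_who2"
        rem Find the SPID whose login/program matches the web application, then, if appropriate:
        sqlcmd -S MYPC\SQLEXPRESS -E -Q "KILL 53"
        rem (53 is a hypothetical SPID; recycling the IIS application pool achieves the same without guessing)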

  • Recommendations for good Unix MTA / groupware solutions? [closed]

    - by Jez
    Possible duplicate: Exchange server replacement that runs on Linux

    I'm setting up a Debian server, and one of the things I need on it is an MTA. I don't want to use something like Exim or Postfix, because I want something that ties SMTP, POP3, and IMAP together all in one (a la Microsoft Exchange). Most MTAs also seem to be hellishly difficult to configure. Try to read the Exim documentation; you could do a university degree on it (I'm not kidding). When you can get an HTTP server like Cherokee, which is easy to configure and has a nice web interface, do MTAs or groupware solutions need to be that hard? I'm aware that some people think "the Unix way" is to have lots of different interacting pieces of software (like maybe an SMTP MTA, a POP3 service, a webmail service, and an overarching manager to tie them all together), but I think this is a situation where that just makes things a lot harder to deal with, and one large software suite fits much more nicely. So, I'm looking for good open source software suites that will run on Debian and that:

    - Combine (at least) SMTP, POP3, and IMAP
    - Are easy(ish) to configure
    - Have a nice configuration web interface or GUI
    - Are not defunct projects

    I don't mind if it's groupware and offers calendaring too, but I would only be using the e-mail functionality for now. Another nice-to-have would be built-in webmail (if we're combining a bunch of functionality, why not?). Note, however, that I do NOT need Outlook support; I am not really looking for a drop-in Exchange replacement. The suites I've found so far that seem to match the above criteria (and have appropriate licenses) are Citadel, Kolab, and Zimbra. I'd appreciate anyone who has experience with any of these giving me their pros and cons, such as how easy they are to configure and what their performance is like. I'd also appreciate any other suggestions that fulfil my criteria that I may have missed.

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page at localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible at localhost/, though, so I assume Apache is working fine. I know this problem has been discussed several times, including on Super User. I implemented and checked everything I could find, but still couldn't solve the problem, and would be glad if someone had a suggestion for this particular case. sudo apachectl -t returns Syntax OK. I have a username.conf file in /etc/apache2/users/:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>

    as proposed here [Super User] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be. The httpd.conf is unchanged and therefore contains the line Include /private/etc/apache2/extra/httpd-userdir.conf. That file in turn contains:

        UserDir Sites
        Include /private/etc/apache2/users/*.conf
        <IfModule bonjour_module>
            RegisterUserSite customized-users
        </IfModule>

    So the httpd*.conf files should be OK. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html. In the error log I simply get:

        [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied

    And yes, after each change I did sudo apachectl restart. Any help on how to solve the problem, or how to further analyze it, would be highly appreciated!
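    One check worth illustrating here (paths match the post; the fix itself is an assumption, not a confirmed diagnosis): a (13)Permission denied at that point is often the Apache user being unable to traverse a parent directory, rather than the Sites folder itself:

        # The _www user needs execute (traverse) rights on every parent directory
        ls -ld /Users /Users/username /Users/username/Sites
        # If /Users/username shows drwx------, grant traversal without exposing its contents:
        chmod o+x /Users/username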

  • Recovering corrupted VB.NET Form file?

    - by Omega
    Good day. This question is directly related to the one I asked here: http://stackoverflow.com/questions/4911099/there-is-no-editor-available-for-form1-vb-error There, I was working on my VB.NET 2010 Express application. I saved, then a blackout came, and now, apparently, I can't view the designer or the code of my form file (Form1.vb). On Stack Overflow I was advised to check the Form1.vb file and try to open it in Notepad; if nothing appeared, it would mean the file was corrupted. I opened it in Notepad and got a blank file. It is 27 KB, but it contains only blank spaces, so I assume it is corrupted. I was told this was a better place for dealing with corrupted files and techniques to recover them. I use Windows 7 and VB.NET 2010 Express. I run Windows 7 on Parallels Desktop on Mac OS X; however, I do not believe that is the problem. Most likely it was that damned blackout; this is the first time this has happened to me, and VB.NET worked just fine for me the whole time (about a month and a half). Thank you.

  • What causes PHP pages to consistently download instead of running normally

    - by Jonathan
    Hi, I'm running Ubuntu Server in a VM to test out different web forum solutions. I have set up ~/public_html/ to be accessible through the apache2 web server, and that works fine. However, when I go to a .php file in a browser (using my VM's ip-address/~username/phpfile.php), it is not executed as it should be; instead the browser offers to save the file / asks what program to open it with. Interestingly, that dialog box does recognise that it is a PHP file. I have the following version of PHP installed on the system:

        PHP 5.3.2-1ubuntu4.5 with Suhosin-Patch (cli) (built: Sep 17 2010 13:49:46)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    And the following server:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built: Nov 18 2010 21:19:09

    If anyone knows what might be causing this, or potential solutions, it would make me very happy :)

    EDIT: It turns out this behaviour is only apparent on files in the ~/public_html/ directory. All PHP files in /var/www/ work fine. Prizes go to whoever can explain why? :D (And by prizes I just mean a "well done", no actual prizes I'm afraid.)
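    A likely explanation, offered as a hedged sketch rather than a confirmed diagnosis: Ubuntu's stock PHP module config of that era deliberately disables the PHP engine for userdir directories, which matches the symptom exactly (executes in /var/www/, downloads in ~/public_html/):

        # In /etc/apache2/mods-available/php5.conf, comment out this stanza:
        #   <IfModule mod_userdir.c>
        #       <Directory /home/*/public_html>
        #           php_admin_value engine Off
        #       </Directory>
        #   </IfModule>
        # then reload Apache:
        sudo /etc/init.d/apache2 reload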

  • Setting a subdomain to access home machine with windows remote desktop

    - by ianhales
    I'm trying to remotely connect to my home machine through Windows Remote Desktop (amongst other things, but this is currently my primary focus). I can do this fine using my home WAN's static IP (thank god for cable!) with port forwarding, but I would like to access it from a subdomain of my web site (e.g. home.mydomain.co.uk). In the cPanel for my hosting account, I've gone into DNS Zones and altered the A record to point to my WAN's IP, which I thought would do the job, but I still cannot connect. When I ping the subdomain, I get my web host's IP, which I guess is to be expected, as I believe the DNS of the host domain is used first and my server handles the redirection of traffic to the IP in the A record. Is this the correct idea? Do A-record changes suffer from the same propagation delays as other DNS record changes, as I suppose that could explain it? (By the way, this thread confirms my thought that setting the A record should be enough: Hostmonster Subdomain redirected to home server IP: How to ssh into home server using subdomain)
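    As an illustrative check (the domain is the poster's example; the nameserver hostname is a placeholder), querying the zone's authoritative server directly shows whether the new A record has been published at all, independent of caching:

        # Find the zone's authoritative servers, then ask one of them directly
        dig NS mydomain.co.uk +short
        dig @ns1.hostingprovider.example home.mydomain.co.uk A +short
        # If this still returns the web host's IP, the A record was not saved correctly;
        # if it returns the WAN IP, the old value is just cached and will expire with the TTL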

  • IIS 7 much slower than IIS 6

    - by JoeJoe
    I have an ASP.NET 3.5 web application running fine on Windows 2003/IIS 6. I published the same exact application to IIS 7.5 (Windows 2008 R2) on a faster box (i5, 8 GB RAM), and it is significantly slower: 5-6 seconds per page vs. 1-2 seconds per page. During that time the Task Manager CPU is always under 10%. Both attach to the same database on another box. The benchmark is consistent from any client browser or machine. I have connection pooling on both and compression on both. Same network subnet. Forms authentication (no SSL yet). Can you give me steps to troubleshoot where the delays are being introduced, or settings in IIS 7 that I may have overlooked? I'm just using defaults. There is only one web site on each box. I understand that the role of an Application as defined in IIS has changed; there is no special Application defined in IIS.
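    As a sketch for narrowing down where the time goes (the URL is a placeholder, and curl would need to be installed on a test client), timing the first byte separately from the full transfer separates server think-time from delivery:

        rem A large "firstbyte" gap points at the application/IIS pipeline on the new box;
        rem a large total-minus-firstbyte gap points at compression or network delivery
        curl -s -o NUL -w "dns:%{time_namelookup} connect:%{time_connect} firstbyte:%{time_starttransfer} total:%{time_total}\n" http://newbox/page.aspx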

  • Recommend a UK based VPS host equivalent to Dreamhost [closed]

    - by Pez Cuckow
    I appreciate that this question could be considered subjective and argumentative, so please make recommendations rather than arguing about "the best"; I believe the "correct" answer is the one closest to what I am looking for. Basically, I live in the UK but have been using the US-based Dreamhost for about 6 years now, and my web projects are getting to the scale where the websites need to be UK-based to cope with the demand and load. I originally had shared hosting with Dreamhost but upgraded to a VPS a while ago, getting 512 MB of RAM and unlimited disk space, bandwidth, and domains for $30. Their control panel is a custom, easy-to-use build they created in house, offering features very similar to other web panels (as far as I am aware). So basically my question boils down to: is there anywhere that offers an equivalent package? In all honesty, as long as I have over 50 GB of HDD space and unlimited domains, it doesn't really matter. Are there any VPS providers you would recommend as reliable? I promise to check every link posted; many thanks for your time!

  • Google Chrome Browser

    - by Harish
    Hi friends. I'm using Google Chrome as my default web browser and I don't have any problem with it generally. The only problem arises when I go to gmail.com and log in to my account. I have to go to history in Google Chrome (Ctrl+Shift+Del) and select "Delete cookies and other data" before I can log in to Gmail again. My Gmail page works just once: I log in, check my mail, and then have to clear the cookies in order to log in again. If I don't, this is the info I get:

        The webpage at https://mail.google.com/mail/?shva=1&ui=html&zy=l&pli=1&auth=DQAAALgAAABhdI_K9uptgb6yQfGVmnl74VZEUH7U2M7WGJn3kJnCiY0CNI5QBU3X-g6UjPENGoHKSHE9nRna_Ygu_d59mN-HG1SUzNpI_UEMJ9CwDqZAYxYLEJl8r_JA2qJNGF8H0fdKfn99Gb2YeI-lprGxCrWRT7LicyADxQvNLQ6l9xBvOccEBSJfdIrna8dOXeX06N41L0zpnLQrVG1qdulR7LxId9XwtVb6QtfhwnambqLoNiY402Y5pjGG1_gFL4dNpJA&gausr=hariss89%40gmail.com has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer. Here are some suggestions: Reload this web page later. Learn more about this problem.

    What can I do?

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set the permissions on my server properly. Currently I have a number of directories and files chmod'd to 0777, but I am not comfortable with it being this way. So, at the advice of a Server Fault specialist, I had my hosting provider install ACL support on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so), because all my dirs and files are owned by "abc", the group is "abc", and the first octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP runs as the user "nobody", so when someone browses one of my web pages that either ends in .php or has some PHP embedded, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet were a 4 (r--), the server would let the browser read the file; if it were a 6 (rw-), the server would also let the browser write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded; what is the user then? How can I use ACLs so that I don't have to set the permission to 6 (rw-) or even 7 (rwx)? [I'm not sure what execute does or means.] I'm just looking for some policy settings to best lock down my dirs and files while still allowing my PHP scripts to do uploads and write to files (so my users don't call to tell me "permission denied"). OK, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
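    For illustration, ACLs let you grant just the PHP user access to specific paths without loosening the last octet at all. A minimal sketch, assuming PHP really runs as "nobody" and an uploads directory (a hypothetical path) as the only place it needs to write:

        # Let "nobody" read/write/traverse the uploads directory, and make that the
        # default for files created inside it
        setfacl -m u:nobody:rwx /home/abc/public_html/uploads
        setfacl -d -m u:nobody:rwx /home/abc/public_html/uploads
        # Everything else can then stay at 0755/0644, owned by abc
        getfacl /home/abc/public_html/uploads   # verify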

  • Allowing Sharepoint to relay email through Exchange

    - by dunxd
    I have written a SharePoint 2007 web part that sends a field from a form to a specified email address. I have the form working as I require, but at present it can only send to internal email addresses. SharePoint's email functions use SMTP to send to our Exchange 2003 server, but because our Exchange server is configured to prevent relaying, if the To: address is not at a local domain, it won't deliver the mail. I don't want to open up our Exchange server as a completely open relay; what I want is to allow my SharePoint servers to send mail to addresses outside our domain. The following seem possible:

    - Allow all mail sent from one of the SharePoint servers to be relayed
    - Allow all mail from a web application pool account to be relayed (I am not sure that the application pool authenticates to the SMTP server, though)
    - A combination of the two

    Can anyone advise on the best way of doing this? Is setting up a dedicated SMTP server on the Exchange server (not a separate physical server) the right way to go about this?

    EDIT: Note this is for Exchange 2003. There is a post on setting this up in Exchange 2007, which appears to have recognised how frequently people need to do this; it doesn't give much detail on 2003, though. Can anyone expand?
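    As an illustrative check of what the current restrictions actually allow from the SharePoint box (all hostnames and addresses below are placeholders), a manual SMTP session makes the relay behaviour visible:

        telnet exchange01 25
        HELO sharepoint01
        MAIL FROM:<forms@yourdomain.example>
        RCPT TO:<someone@externaldomain.example>

    A "550 5.7.1 Unable to relay" response to the RCPT line confirms the restriction is biting for that host; after granting the SharePoint server's IP relay rights in the SMTP virtual server's Relay settings, the same session should return 250.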

  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
    I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals. The hard disks had plenty of open space, and there was plenty of available memory and swap space. Nothing was eating up the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues, but SHOW PROCESSLIST only showed my own connection, no others. Worst of all, no errors in the MySQL log even remotely coincided with the unavailability of the server. On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors. There's only one other application on the server that accepts network connections, and I killed that one (in case it was holding too many open connections or something), but it didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again. What else should I check in these circumstances? Any idea what the problem might be, and how might I verify that it is in fact the problem?
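    If it recurs, a couple of quick checks, sketched here with standard tools, would distinguish a hit on the connection limit from a network-level problem:

        # Compare the server's connection ceiling with its historical peak
        mysql -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Max_used_connections';"
        # Count the sockets actually open on 3306, by TCP state (may include
        # half-closed connections that MySQL no longer shows in PROCESSLIST)
        netstat -ant | grep ':3306' | awk '{print $6}' | sort | uniq -c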

  • Chrome Lockups Windows 7 64-bit

    - by Mike Chess
    I'm running Google Chrome (6.0.427.0 dev) on Windows 7 Home Premium 64-bit (AMD Phenom 3.00 GHz, 8 GB RAM). The computer locks up hard after Chrome has been running for about five minutes. The lockup happens whether Chrome is actively being used to browse web sites or is just idling; no programs can be started or interacted with when it happens, and the computer must be power-cycled to recover. The lockup happens regardless of which web sites are being browsed. The system event logs do not show any events around the time of the lockup. All other applications run just fine on this system. In fact, Chrome ran without issue for several months on this system (it was brand new in March 2010), and I run the same version of Chrome on other computers (Windows XP SP3) without issue. I've come to really like Chrome and use it as my default browser whenever possible. What could be causing Chrome to lock the system up as it does? Does Chrome have any logs that aren't part of the Windows event log? Does Chrome have a debug command-line switch that might reveal more about what happens?
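    On the last question: Chrome does have logging switches. A sketch of a debugging launch (the path assumes a default per-user install on Windows 7):

        "C:\Users\<name>\AppData\Local\Google\Chrome\Application\chrome.exe" --enable-logging --v=1
        rem The log is written to chrome_debug.log in Chrome's "User Data" directory
        rem Launching with --disable-gpu as well can rule the graphics driver in or out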

  • Tell if IIS is being asked to serve compressed pages?

    - by Graham
    Hi, I'm trying to find out if our IIS server is being asked to serve pages compressed. I'm a noob regarding a lot of this, so I'm working my way through the issues. We're using IIS 6.0 and have correctly turned compression on. If I use Fiddler2 to analyse the HTTP requests via localhost, Fiddler reports that the pages are compressed. If we then access the server over the network, either via its external URL or via the internal server name, Fiddler reports those pages as uncompressed. Therefore, it's logical to assume that something is getting in the way, presumably our ISA server. Our ISA administrator states that ISA is configured to allow compressed requests, but what I want to do is look at the requests coming through to IIS to see if IIS is being asked to serve the pages compressed. I'm fairly convinced that our request is going to ISA and that ISA is forwarding it, but without the compression details, so IIS is not performing any compression. I've looked at the IIS logs but can't see anything obvious about the HTTP request. Is there any way I can check this sort of information on the web server itself? One thing that is confusing, but may be normal, is that the client IP making the request is not the original PC (i.e. mine) and not the ISA firewall, but the web server itself... Thanks
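    As a sketch of a direct check (the hostname and page are placeholders, and curl would need to be installed), you can replay a request with the compression header explicitly set, first from the web server itself and then from a client PC behind ISA, and compare:

        curl -I -H "Accept-Encoding: gzip, deflate" http://servername/page.aspx
        rem A "Content-Encoding: gzip" response header here means IIS compresses when asked;
        rem if the same request from behind ISA comes back uncompressed, the proxy is
        rem stripping or not forwarding the Accept-Encoding header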

  • IIS 6 Denies access to the default document

    - by Jim
    I've got Windows Server 2003 with IIS 6 hosting a couple of ASP.NET MVC 2 applications (.NET 4), all in the Default Web Site. Most of them simply use Integrated authentication, but a couple use Forms as well. All the applications work properly and are correctly accessible. The problem I'm trying to resolve is access to the default document, currently specified as index.htm. Both index.htm and the Default Web Site are configured to allow anonymous access (with none of the authenticated access boxes checked). However, access to the file is denied: requests to server.domain.tld/ and server.domain.tld/index.htm both yield 401 errors, while server.domain.tld/default.htm (a file that does not exist) properly returns a 404. If I alter the file security on index.htm to allow Integrated authentication, then requesting /index.htm directly works properly for users with domain accounts, but anonymous users get a login prompt/401. How can I configure IIS to allow all users to view index.htm via server.domain.tld/?
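    A 401 for anonymous users when the IIS configuration looks right usually points at NTFS permissions for the anonymous account. As an illustration (the path is an assumption, and the account name follows the IIS 6 convention of IUSR_ plus the machine name):

        rem Check which accounts can read the default document
        cacls "C:\Inetpub\wwwroot\index.htm"
        rem If the anonymous account is missing, grant it read access:
        cacls "C:\Inetpub\wwwroot\index.htm" /E /G IUSR_MACHINENAME:R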

  • debian lenny email server

    - by Dal
    Hi, I am a newbie. I set up a Debian Lenny box at home with the web and email servers from the default installation. I followed the instructions for Exim, ran dpkg-reconfigure exim4-config, and set it up for mydomainhere.com. I created a one-line message file and attempted to test Exim by running the command exim [email protected] < msgfile. I also tried exim4 and Exim, but I get the same error: -bash: Exim: command not found. Obviously I am ignorant of how to run and test Exim. I also tried to run a PHP file that sends a test mail, with no success; that script is tested and works fine when I send it from my hosting ISP on a different domain, so I know the PHP script is good. I set up the Debian system behind a Netgear firewall, and it uses a 192.168.1.x IP. The web server works great and users can visit my site, but I lack the knowledge to get the email working. I'd appreciate it if someone could guide me.
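    For illustration, on Debian the binary is normally reachable as exim4 in /usr/sbin, which is not on a regular user's PATH; a minimal test, with a placeholder recipient, would be:

        # Full path avoids the PATH problem; -v shows the delivery dialogue
        sudo /usr/sbin/exim4 -v someone@example.com < msgfile
        # Then check the result in Exim's log
        sudo tail /var/log/exim4/mainlog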

  • IPTables configuration help

    - by Sam
    I'm after some help with setting up iptables. Mostly the configuration is working, but regardless of what I try, I cannot allow localhost to access the local Apache only (i.e. localhost accessing localhost:80 only). Here is my script:

        #!/bin/bash
        # Allow root to access external web and ftp
        iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
        iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT
        # Allow DNS queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
        # Allow in- and outbound SSH to/from any server
        iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT
        # Accept ICMP requests
        iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT
        # Accept connections from any local machines but disallow localhost access to networked machines
        iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.1.0/24 -j DROP
        # Drop ALL other traffic
        iptables -A OUTPUT -p tcp -d 0/0 -j DROP
        iptables -A OUTPUT -p udp -d 0/0 -j DROP

    Now, I have tried many permutations and I'm obviously missing something. I placed the new rules above the inbound/outbound SSH rules, so it's not the precedence order. If someone could give me a heads-up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
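    A minimal sketch of what may be missing (an assumption about the intent, not a tested fix): localhost-to-localhost traffic travels over the loopback interface, which none of the rules above match, so it falls through to the final DROP rules. Explicit loopback rules for port 80, inserted ahead of the drops, would allow it:

        iptables -I OUTPUT -o lo -p tcp --dport 80 -j ACCEPT   # localhost -> local Apache
        iptables -I OUTPUT -o lo -p tcp --sport 80 -j ACCEPT   # Apache's replies
        iptables -I INPUT  -i lo -p tcp --dport 80 -j ACCEPT   # delivery to Apache on lo
        iptables -I INPUT  -i lo -p tcp --sport 80 -j ACCEPT   # delivery of replies on lo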

  • How to setup bindings for development IIS 7.5 with lot of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host a lot of sites on a newly installed Windows Server 2008 R2. These sites are test "clones" of sites on the "real" web server, and they should be accessible only on the local network (domain). Developers add new sites for our new customers; project managers use this server to check progress and test new sites and features; QA people have access and test before we copy a site to the "real" web server. Developers have access to the IIS console; in fact, they can RDP to the test server ("tester") with their developer domain credentials and permissions, and they are local admins on that machine. On our previous server I used a different port number for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option, because we don't want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried adding a wildcard DNS record for subdomains of the test server (*.tester), and that seemed to work for a while, but the change caused some bad problems in our domain network, and the admin forbade me to mess with DNS; he said I have to add a DNS record for every subdomain manually, that I cannot use wildcards, and that there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative and outsourced from a different organization, and I can't do anything about that. Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester's DNS server? So please, I need some advice: what should I do? Are different port numbers the only option left? Thanks!
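    On adding records automatically: Windows DNS can be scripted with dnscmd, so a site-creation script could register each new name. A sketch with hypothetical server, zone, and host names; it still has to run under credentials allowed to update the zone, which is exactly the political constraint described above:

        rem Add an A record for site1.tester.example.local pointing at the IIS box
        dnscmd dns01.example.local /RecordAdd example.local site1.tester A 192.0.2.10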

  • Command line switching

    - by Larry
    I have read through some suggestions, but I am just not technical enough to get this, I think. I am a CAD designer, and each file has 5 files associated with it. I have 3 sets of 5 files, and each set needs to go into its own zip file, placed on a separate server. For example:

        "C:\Program Files\7-zip\7z.exe" a file1.zip "O:\server2\map files\BC\BC.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file2.zip "O:\server2\map files\BC\ON.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file3.zip "O:\server2\map files\BC\AB.d*"-0

    and I am in the directory "S:\server\map files\provinces" (for example). These lines run within an existing batch file, and by the time it reaches the 3 lines above, it's in the S: directory from the sample above. So it's looking on my PC for the 7-Zip program and creating the 3 zip file names, which it does, but it should place those zip files on the separate server, which it doesn't. Also, the first zip file includes all the other 10 files, the second zip file does the same plus includes the first zip file, and the third does the same with the other two zip files, making me think the code isn't recognizing the part after file1.zip where I am trying to tell it which files to include and where to place the zip files. Ultimately, I want to either have the system create a new zip file if the old one was deleted, or copy the new files into the existing zip and overwrite any older files, and for these zip files to be placed in a separate location, which is where we share our files with other personnel from within our company. The S: drive is for all originals and O: is for sharing. Is there a list of all the switch options, with many different samples?
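    For illustration, 7-Zip's syntax is 7z a <archive> <files>: the archive path comes first and the file pattern after it, so putting the destination in the archive argument and the pattern in the file argument gives what the lines above were probably meant to do (paths are taken from the post; -tzip is an assumption to force .zip format):

        rem "a" both creates a missing archive and adds/replaces files in an existing one
        "C:\Program Files\7-zip\7z.exe" a -tzip "O:\server2\map files\BC\file1.zip" "BC.d*"
        "C:\Program Files\7-zip\7z.exe" a -tzip "O:\server2\map files\BC\file2.zip" "ON.d*"
        "C:\Program Files\7-zip\7z.exe" a -tzip "O:\server2\map files\BC\file3.zip" "AB.d*"

    Running 7z.exe with no arguments prints the full list of commands and switches.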

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac; I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. My default root directory for the Apache web server points at an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the userdir include conf file (Include /private/etc/apache2/extra/httpd-userdir.conf) and uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite there being PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they all do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that suggested as a solution in another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
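    As a first diagnostic sketch (standard Apache tooling, nothing site-specific assumed), asking Apache to dump its parsed virtual host table shows whether the vhosts file is being read at all and which vhost is acting as the default:

        sudo apachectl -t      # syntax check; also surfaces include errors
        sudo apachectl -S      # dumps the parsed VirtualHost list; mysite.dev should appear here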

  • Cannot ssh into server

    - by revolver
    I am trying to SSH into a Linux machine running Ubuntu, but the interactive shell gets stuck somewhere and I can't key in anything. I am on Mac OS X Lion. This only happens when I try to access the machine via its external IP; SSH over the local LAN works perfectly.

        macbook:~ user$ ssh -v -v user@serverip
        // I skipped the rest of the log, but I can paste it here again if needed.
        Authenticated to serverip
        debug1: channel 0: new [client-session]
        debug2: channel 0: send open
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.
        debug1: Sending env LC_CTYPE = UTF-8
        debug2: channel 0: request env confirm 0
        debug2: channel 0: request shell confirm 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: callback done
        debug2: channel 0: open confirm rwindow 0 rmax 32768

    My terminal just hangs after this, and I can't key in anything. I checked /var/log/auth on the server and saw that a session is being created and I had already logged in, but I don't see any response on my client machine. I googled around, and a lot of the solutions had to do with the Broadcom wireless driver, but I am not even using one, so I am pretty clueless here. To give you more information, the Linux machine also runs a web server, and I have no problem accessing that. Thanks; any help is appreciated.
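    A hang exactly at "Entering interactive session" on an external path, while the small packets of the authentication exchange get through, is a classic path-MTU symptom. As a sketch of a test from the Mac client (the sizes are assumptions chosen to bracket a typical 1500-byte MTU):

        # Send don't-fragment pings of increasing size to find where packets start to vanish
        ping -D -s 1400 serverip
        ping -D -s 1472 serverip
        # If the larger DF pings are lost, lowering the MTU on the router or client,
        # or fixing path-MTU discovery, typically unsticks the stalled session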

  • Nginx and Wordpress side-by-side with static directory alias?

    - by user117161
    I'm an Nginx novice, but I have it set up with WordPress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up WordPress sites off the web root:

        domain.com/site1 - a WordPress network single site, which renders as expected
        domain.com/site2 - ditto, etc.

    Concurrently, I can easily create static files in the web root that don't conflict or interact with WordPress, and they are also rendered normally:

        domain.com/hello.html - rendered normally
        domain.com/hello.php - rendered normally, including PHP processing
        domain.com/static/hello.php - rendered normally (as long as "static" isn't a WP single-site name)

    What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a directory domain.com/static in the web root, put static sites in there (domain.com/static/site3, domain.com/static/site4), and have Nginx check each request that comes in at the root. A request comes in for domain.com/site3; before handing off to WordPress, Nginx checks whether it exists in the /static folder (domain.com/static/site3). If static content exists there, it serves that content while maintaining the root URI, i.e. it serves domain.com/site3 with the content from domain.com/static/site3. If not, it lets WordPress check whether /site3 is a WordPress single network site, as it does now, and the process continues normally. In nginx.conf, in the server section, I start with this try_files rule:

        location / { try_files $uri $uri/ /index.php?q=$uri&$args; }

    I then include a bunch of WordPress-specific rules, as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.), and when the looping stopped I had managed to bypass WordPress. Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!
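    A minimal sketch of that idea, building on the try_files line already quoted (untested, and it assumes the static sites are self-contained under /static); because try_files checks paths without rewriting the URI, it avoids the /static/static looping that rewrite rules produce:

        location / {
            # Look for a static copy under /static first, then the literal path,
            # and only then fall through to WordPress
            try_files /static$uri /static$uri/ $uri $uri/ /index.php?q=$uri&$args;
        }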

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections; the problem seems to fix itself after about 5-10 minutes. The error is:

        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through the Database Engine Tuning Advisor to apply its indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes, and can that be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks,
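    On taking a snapshot when nobody is around: one sketch is a Scheduled Task that appends the active requests to a file every minute during problem windows (the instance name and file paths are placeholders):

        rem capture_activity.cmd -- run every minute from Task Scheduler
        sqlcmd -S SERVER\INSTANCE -E -Q "SELECT GETDATE() AS ts, session_id, status, command, wait_type, cpu_time FROM sys.dm_exec_requests" >> C:\logs\sql_activity.txt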
