Search Results

Search found 5881 results on 236 pages for 'junction points'.

Page 173/236

  • Network bandwidth usage dashboard?

    - by SkippyFlipjack
    I have a couple of wifi access points hooked up to my home network, one of which I keep unsecured for some development I do; there are only a couple of other homes within range and they've got their own wifi, so it's not a big concern. I also have a Sonos system, a TiVo, a Roku, a couple of laptops, a couple of phones, an iPad and a desktop machine, all of which are internet-smart. So when my internet bandwidth tanks and it takes five minutes to load a YouTube video, I want to know what's going on, and there are many potential culprits. I'd like to be able to plug my MacBook into the primary router and see a nice little dashboard of the devices on the network and what kind of bandwidth each is using at that moment. I could figure this out with Wireshark or tcpdump, but I figure there has to be an easier way. I've tried a few different commercial products but none really presented the right info. Suggestions? (This may be a question for Super User since my Apple Time Capsule's SNMP capabilities are limited, but I figure admins of small business networks would have dealt with the same issue.)

    Read the article

  • New to server admin, Diagnosing Memory and CPU issues on DV

    - by G Thompson
    Sorry for my ignorance and lack of knowledge. I'm a PHP/front-end developer just now venturing into very minor server management/diagnostics. I have a Media Temple DV account. I have 2 sites that run a PHP script through a subscription service to an API. Basically the API hits the site with said script; the script runs, gathers data from the API, and saves it to a SQL database. I noticed that these sites seemed to be causing memory overages on my server (not sure why), so I temporarily disabled them. The memory overage alerts stopped, but my CPU still sits really high, like at 115% and above. I'm trying to diagnose this with tutorials and resources but just can't seem to find a solution. I'll attach screenshots (taken without the PHP scripts I assume are responsible for the memory issues) that I'm assuming are important to the diagnosis. Can anyone point me in the right direction to start figuring out A. if and why the PHP script may be causing memory overages, and B. why my CPU is always over 100%? Thanks guys! Links to screenshots... can't post images with low points. http://i.stack.imgur.com/A64k4.png http://i.stack.imgur.com/qm1rV.png
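
    For reference, a minimal set of checks (a sketch, not a definitive diagnosis) to see which process is actually eating the CPU on a Linux box like this; the MySQL credential hint is an assumption about a typical Plesk setup:

      # top CPU consumers right now (name, PID, %CPU, %MEM)
      ps aux --sort=-%cpu | head -n 15

      # interactive view; press Shift+M to sort by memory instead of CPU
      top

      # if mysqld is the busy one, look for long-running queries
      # (on Plesk boxes the admin password usually lives in /etc/psa/.psa.shadow)
      mysql -u admin -p -e "SHOW FULL PROCESSLIST;"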

    Read the article

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to the format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. The default root directory for my Apache web server points to an external hard drive. In my httpd.conf I have:

      DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

      <Directory />
          Options FollowSymLinks
          AllowOverride All
          Order deny,allow
          Allow from all
      </Directory>

    And then beneath that:

      <Directory "/Storage/Sites">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride All
          Order allow,deny
          Allow from All
      </Directory>

    At the end of this file, I have commented out the userdir include:

      Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts include:

      Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

      <VirtualHost *:80>
          DocumentRoot "/Storage/Sites/mysite"
          ServerName mysite.dev
      </VirtualHost>

    I also have a record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem is that, despite there being PHP files in /Storage/Sites/mysite, the server is still serving /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php in mysite has different code). I have tried setting up other virtual hosts, but they do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that suggested as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
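
    A couple of quick checks (a sketch of the usual first steps): if no vhost's ServerName matches the request, Apache falls back to the first/default vhost or the global DocumentRoot, so it helps to see exactly what Apache parsed and whether mysite.dev really resolves locally:

      # show the vhosts Apache actually loaded and which one is the default
      sudo apachectl -S

      # confirm the config parses cleanly
      sudo apachectl configtest

      # verify mysite.dev resolves to 127.0.0.1 through the OS X resolver
      dscacheutil -q host -a name mysite.dev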

    Read the article

  • Difference between all servers in one cluster and more than one cluster with servers?

    - by silla
    I'm not sure I understand what the difference is, or how it works, when servers are running in one cluster versus spread across more than one cluster - with regard to high availability and load balancing. To me they seem somehow the same; there isn't really a big difference. Let's take a simple example: (1) 2 servers in 1 cluster, versus (2) 2 clusters with 1 server each. In case 1, if one server fails, the other is able to continue the work; the same goes for load balancing - the two servers are able to share the work. In case 2, the same thing: if one server fails... The only thing that could be a problem with case 1 is if the cluster itself fails (then both servers are dead). But is that even possible? I have been reading about clustering and high availability but I don't think I really get this - probably I haven't really understood how a cluster works. Are these two setups, 1 cluster versus 2 clusters, somehow the same, or are there really some big differences? What should I know about it? Thank you

    Read the article

  • Stop Windows 7 from accessing or writing to hard drive unless "told" to by me? (More info inside...)

    - by Jeff
    A confusing question, perhaps, but bear with me. I have two internal HDDs set up in a RAID0 array which I use as mass storage. I access the drive very infrequently (once a day at most), so I have set Windows 7's power options to turn off idle disks after only 1 minute. This is fine, and the disks are turned off most of the time. However, I notice that Windows sometimes spins up the drives when I really, really don't want or need it to. This causes a 30-second delay while both drives spin up, and locks up my system. Some examples of when this happens: 1) When I'm installing something using Windows Installer or InstallShield; it seems to me as if they use the drive with the most free space as the installer cache location... so my big RAID drive has to spin up! Most annoying. 2) Apparently, when I open a Java-based program which resides on my system drive and has nothing to do with my RAID drive! 3) At boot-up and shut-down time. At shutdown the drives spin up only for the computer to immediately shut down! Incredibly frustrating! I've already tried changing the letter of the drive, and at some points have removed the drive letter entirely, which solves the first two issues above. So my question (FINALLY!) is this: is there any way I can mark this drive as being for "storage only", so Windows basically does not see it at all until I actually invoke it somehow? Or is there any way I could set it up so that only specific programs have write access to it? For example, download managers, TeraCopy, etc.? Basically I want it to be a "ghost drive" until I'm ready to use it, and to stop Windows from spinning it up all the damn time! Thank you. :)
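
    One hedged way to get the "ghost drive" behaviour is to take the whole array offline with diskpart when it isn't needed and bring it online on demand. The disk number below is an assumption - check "list disk" first - and note that an offline disk is invisible to every program, not just to Windows housekeeping:

      :: elevated command prompt
      diskpart
        list disk
        select disk 1
        offline disk
        exit

      :: when you actually want the drive again:
      diskpart
        select disk 1
        online disk
        exit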

    Read the article

  • How do I find the Serial Number of a USB Drive?

    - by jamuraa
    I'm trying to re-enable USB autoplay in a secure way, by installing a program on each of the computers I use so that I can run my launcher (PStart in this case) whenever I plug in my specific USB drive. The tool that I'm using to enable this - AutoRunGuard - needs the serial number of the USB drive that I am using. I can't figure out where to find this in Windows. Ideally I would not need to install and run a separate program for this (seemingly) simple task. Since this is a pretty easy question, bonus points if you also tell me how to discover it in Linux. What steps do I need to take to retrieve a USB drive's serial number? UPDATE: Just in case people come here looking for the answer for AutoRunGuard, I discovered that it doesn't want the USB device serial number, but the volume serial number. That can be found by going into the command line, navigating to the drive, and executing dir - the volume serial number is shown in the top two lines; use it without the dash.
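
    For the Linux side, a couple of ways to read both numbers (device paths here are examples - substitute your own stick, e.g. from dmesg or lsblk):

      # USB device serial number (iSerial) for a given block device
      udevadm info --query=all --name=/dev/sdb | grep ID_SERIAL
      # or browse all USB devices
      lsusb -v 2>/dev/null | grep -i iSerial

      # FAT volume serial number (shown as UUID, e.g. 1234-ABCD)
      sudo blkid /dev/sdb1

      # Windows equivalent of the dir trick, without changing directory:
      #   vol E: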

    Read the article

  • My nameserver isn't registered?

    - by jflory7
    My problem is that I am trying to set my domain's nameservers to the nameservers of my dedicated server. The domain is registered with Namecheap, and every time I try to input the two nameservers for my private server, one of them is rejected as unregistered. The dedicated server's control panel is Parallels Plesk 11.5, and the nameservers provided to me are: one from the actual provider, OVH (sdns1.ovh.ca), and the other a unique nameserver that points directly to my specific dedicated server. I was previously able to get Namecheap to accept that nameserver without an error for another domain I own, so I know this is possible - it works for one of my other domains. After being redirected by Namecheap to contact the server provider, I called OVH and they said it was something I would have to do myself. One interesting detail the OVH representative mentioned was that he saw that my port 53 was closed, which is the port that handles DNS. The only problem is that I have no idea how to open this port back up. So, my final question is: how can I get this nameserver working in Namecheap so it points to my dedicated server? If you need any more details, feel free to ask for clarification.
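
    Two things are usually involved here: the custom nameserver hostname has to be registered as a host (glue record) at the registry - registrars typically have a "register nameserver" / "personal DNS server" option for this - and the server itself has to answer DNS queries on port 53. A few hedged checks (ns.yourserver.example is a placeholder for your custom nameserver name):

      # does the nameserver hostname resolve at all?
      dig +short ns.yourserver.example A

      # is anything answering DNS on the box, from the outside?
      dig @ns.yourserver.example yourdomain.example SOA

      # on the server itself: is a DNS service listening on port 53?
      netstat -tulpn | grep ':53'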

    Read the article

  • Google Apps Domain Level Shared Contacts?

    - by dkirk
    My firm switched to Google Apps Premier Edition 2 weeks ago, and aside from the way Google handles shared contacts, things are going quite well. Previously, on our Exchange server, we had numerous shared contact lists set up in the shared folders - a separate list for vendors, sales agents, etc. Is there not a way to set up lists or groups like this at the domain level in Google Apps? I have found a ton of forums with users asking the same question but no good answers, unless you purchase a third-party app in the marketplace. I have toyed around with the "google-shared-contacts-client" here: http://code.google.com/p/google-shared-contacts-client/ and it almost does it, but it falls short when trying to group contacts at the domain level or when trying to search for a contact by company name. Are either of these things possible? I am now looking at creating a Google Docs spreadsheet to share with the domain, just to have a separate, defined list of contacts that is searchable by various fields... Anyone who could shed some light on domain-level contact sharing relating to the points above, I would be most grateful...

    Read the article

  • Hostname vs webpage domain.

    - by Mark
    Hi all, I'm just starting to look at deploying a webpage and getting into the joys of DNS etc., and I'm wondering how you set up multiple web servers, each with their own hostnames/public IP addresses, yet have them serve up a webpage from one domain. For example, let's say you have a website example.com, and an A record in DNS that points at its IP address of 1.2.3.4. You want to have two servers, prod1 and prod2, with some kind of load balancer in front of them for failover reasons. The way I see it you would want the hostnames of these servers to be prod1.example.com and prod2.example.com, and perhaps loadb.example.com. How would you set up the DNS so this would all work - i.e. you could ssh to any of the server names (prod1.example.com, prod2.example.com or loadb.example.com) and also just use the www.example.com URL to go to the website? And would all these server names be resolvable from the public internet, and is that safe? This would be a Linux environment - for argument's sake Ubuntu - with a Django-framework dynamic website running in Apache 2.2. Cheers, Mark
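
    One common layout, as a hedged sketch (all IPs below are made up): the site names point at the balancer's public address, and each machine also gets its own A record for administration. Whether prod1/prod2 should be publicly resolvable is mostly a firewall question - hiding the names adds little, restricting SSH by source address or keys is what actually protects them:

      ; BIND-style zone sketch for example.com
      @       IN  A   1.2.3.4      ; load balancer VIP - serves the website
      www     IN  A   1.2.3.4      ; same VIP
      loadb   IN  A   1.2.3.4      ; admin name for the balancer
      prod1   IN  A   1.2.3.5      ; real server 1 (ssh here)
      prod2   IN  A   1.2.3.6      ; real server 2 (ssh here)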

    Read the article

  • Website always having DNS problems

    - by Root
    I moved my website from shared hosting to a VPS. On shared hosting all I did was update my nameservers, whereas now I have my own VPS and I used one of my domains, sjdpublishing.com, as the primary domain for the VPS. I created nameservers ns1.sjdpublishing.com and ns2.sjdpublishing.com, and my actual website, creativeproperty.com.au, points to ns1.sjdpublishing.com and ns2.sjdpublishing.com. I am having repeated problems with the creativeproperty.com.au domain. A few weeks back I had a problem which was resolved by flushing DNS, and later I had a similar problem which was not resolved by flushing DNS. I posted a question here and someone suggested going to Network Settings in Mac OS X and removing the IP, since nslookup creativeproperty.com.au in my Mac terminal pointed to my router IP - and that fixed the problem. Now many of my clients are complaining that they are having the same trouble accessing my website. I don't know whether to flush DNS, change network settings, or whether it's some other issue. Can anyone please check whether my domains creativeproperty.com.au and sjdpublishing.com have correct records, and also tell me the best solution for this issue?
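
    A few checks anyone can run to see whether the delegation and the custom nameservers are consistent (this is a sketch of the usual debugging steps, not a diagnosis):

      # what nameservers does the registry actually delegate the domain to?
      dig +trace creativeproperty.com.au NS

      # do both custom nameservers answer authoritatively, with the same records?
      dig @ns1.sjdpublishing.com creativeproperty.com.au A +norecurse
      dig @ns2.sjdpublishing.com creativeproperty.com.au A +norecurse

      # what does a public resolver currently return?
      dig +short creativeproperty.com.au A @8.8.8.8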

    Read the article

  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:

      <location path="auth/windows">
        <system.webServer>
          <modules>
            <remove name="FormsAuthentication"/>
          </modules>
        </system.webServer>
      </location>

    I'm experimenting with mixed-mode authentication (Windows and Forms). Using IIS Manager, I've disabled Anonymous authentication for auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (unauthorized) was originally generated by the AnonymousAuthenticationModule. However, the FormsAuthenticationModule converts that to a 302, which is what the browser sees. So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working, but I'm not seeing any complaints anywhere (event viewer, yellow pages of death, etc.) that would indicate it's an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header. A few other points: a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to; b) I got the "name" of the module I'm trying to remove from the inetsrv\config\applicationHost.config file; c) I have this element in my Web.config file: <modules runAllManagedModulesForAllRequests="false">; d) I tried a <location> element for the path in which I set the authentication mode to "None", but that gave a yellow exception page saying the property can't be set below the application level. Has anyone had any luck removing modules in this fashion?

    Read the article

  • Driver corruption when deploying Dell Touchpad Drivers (with software) during imaging process

    - by BigHomie
    We're an SCCM shop, and use it to deploy Windows. When deploying Dell laptops (multiple models), the touchpad drivers seem to install properly, but the accompanying software doesn't. The resulting problem is that when the touchpad is pressed, the mouse pointer occasionally 'jumps' to certain points on the screen. A visible sign of the problem is that the touchpad icon isn't in the system tray. The software appears in the Control Panel, but when opened, part of the GUI is pixelated - indicating a botched install, maybe? The manual fix is to go into Device Manager and uninstall the driver with the option to delete the driver software; after a restart, the driver and software are reinstalled and from then on work as expected. Obviously this partially defeats the purpose of a zero-touch deployment. If anyone knows why this happens and/or a possible workaround, those answers would be valid as well. Barring that, I want to find a way to deploy the driver and the touchpad software unattended, so they can be installed conditionally during the imaging process. To be honest I'm not sure how to troubleshoot this; I suppose I could try drvinst.exe to install the driver, but finding out why it fails in the first place would keep me from spinning my wheels.
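
    One approach worth trying, strictly as a sketch: split the driver from the Dell application. Have the task sequence inject only the bare .inf driver with pnputil, then push the touchpad application afterwards as an ordinary silent-install package once Windows is up. The paths below are examples and the silent switch is a guess - check the specific Dell installer's documentation:

      :: stage just the INF-based driver during imaging
      pnputil.exe -i -a "C:\Drivers\DellTouchpad\*.inf"

      :: deploy the vendor application afterwards as a normal package
      :: (switch below is a typical InstallShield guess, verify per package)
      Setup.exe /S /v"/qn REBOOT=ReallySuppress"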

    Read the article

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize root's activity (or specific root tasks) over everything else? For instance, several times a year something goes wrong (usually human error - overstressing Apache/MySQL) and the system becomes unresponsive under heavy load, with a load average around 200 on an 8-core CPU. I know there are limits that get PHP scripts killed after a while, but that's not the answer, because the limit has to be at least 45 minutes long. The problem is that by the time I'm able to log in via SSH and restart Apache/MySQL under this server stress, it nearly hits those 45 minutes anyway. A hardware restart isn't great either, because it usually triggers fsck at boot on all the hard drives, since the box typically hasn't been restarted for a long time. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart Apache/MySQL? Is there any way to give SSH users or the root user higher priority, so that logging in and issuing these restart (or rather stop) commands wouldn't take so long? One idea comes to mind - use nice for Apache/MySQL - but I can't risk limiting those two vital apps 24/7, or could I? I'm a little scared that some other system process would then slow the pages down too much: backup processes, swap (if any), etc. It's a pretty heavy PHP framework with 20k visits a day, so it needs every hardware/software resource available. I can't throttle it the whole time, only at the specific points when the system becomes unresponsive, so I can maintain it.
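
    A hedged sketch of the low-effort version of this: rather than nicing Apache/MySQL down all the time, nice sshd (and every shell it spawns) up, so you can still get in when the load explodes, and then stop Apache first since that is what usually frees MySQL quickly:

      # give sshd and its children a CPU and IO advantage
      renice -n -15 -p $(pgrep -d ' ' sshd)
      ionice -c2 -n0 -p $(pgrep -d ' ' sshd)

      # (to make this survive restarts, start sshd under nice/ionice in its
      #  init script - sketch only, adjust to your init system)

      # when the box melts down:
      /etc/init.d/apache2 stop && /etc/init.d/mysql restart && /etc/init.d/apache2 start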

    Read the article

  • Windows 7, Black screen with cursor, impossible to logon

    - by PJC
    First - I have gone through as many of the possible solutions I've found here and elsewhere as I could. [Edit - to describe the issue in more detail: the PC appears to boot correctly, but instead of the logon screen I get a black screen WITH the pointer cursor, and it responds correctly to the mouse. Pressing CTRL-ALT-DEL brings up the logon screen's background, but with no logon area nor any other content. This screen was at the full resolution before I uninstalled the graphics driver in Safe Mode.] I also just ran a full, up-to-date AVG scan from boot media. [Edit - the AVG scan, updated to today's virus signatures, found no issues at all.] So - steps I've tried: Safe boot and restore from before the issue - done, no help. Uninstall the graphics driver - done; now I have a 1024x768 fallback screen, but still no way in. sfc /scannow - only doable from a safe boot obviously, but no change. [Edit - booting from restore media, performing a startup repair and...] Restore further back - restored to 2 days ago, and I'd had many reboots since then with no problem. Enable autologin to try to get beyond the login screen - done, doesn't work. It seems the best advice is a complete reinstall, but I really don't want to do that because it'll take 3-4 days to add back all the apps I use. Some key points to note: in both states - before and after removing the video driver - I always had a mouse cursor on the screen, and CTRL-ALT-DEL flashes up the login background but no login info. I can (and often do) reinstall from scratch, but I was at a fairly stable state before this and would prefer not to. -Paul
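
    Two more things worth checking from the install DVD's recovery command prompt before giving up (a sketch; drive letters can differ in the recovery environment, so confirm them with dir first). A corrupted Winlogon "Shell" value is a classic cause of black-screen-with-cursor, and sfc can also be run offline against the installed copy:

      sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows
      chkdsk C: /f

      :: check that the shell is still explorer.exe
      reg load HKLM\OfflineSoftware C:\Windows\System32\config\SOFTWARE
      reg query "HKLM\OfflineSoftware\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
      reg unload HKLM\OfflineSoftware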

    Read the article

  • How should an experienced Windows SysAdmin learn Linux? [closed]

    - by Systemspoet
    I have a new hire starting in a few weeks who is an experienced Windows SysAdmin. I think he's fairly senior on the Windows side, with a pretty deep AD understanding and experience with Exchange 2007, 2010, and Exchange migrations. He's done a little PowerShell, but I suspect more of the "run this command to do this" variety than the "write a script to do this" sort. However, we are a mixed shop and (he knows this) I expect him to become a reasonably competent Linux SysAdmin over time. I'm looking for good starting points to bring him along. I have over ten years of Linux/UNIX experience, so it all seems intuitive to me, but I've been thinking about the toolkit you actually need to be productive in the Linux CLI world. Just to be able to use the machines at all, off the top of my head... vi; basic CLI stuff - move around, rename files, copy files, tar, gzip, change passwords, find the relevant manpages, keep track of where you are, find things in your history, etc.; more advanced things that I take for granted but are actually pretty hard - doing things with 'find', extracting relevant text via 'awk' and/or 'cut', knowing when to use 'grep' and when to use 'grep -e' or 'egrep'; distribution-specific stuff - compiling software, rpm, yum, apt-get, you name it. This all seems pretty basic to me, but when I think back to 1995 when I was first learning my way, some of those things took me years to master. So my question is: where should I send him to pick up those skills? I'm not just thinking of classes, but also websites and books. Where do you all suggest as a starting point for picking up Linux skills?

    Read the article

  • What should I use to ping multiple IPs and get notified of time outs?

    - by HumanVirus
    I've been using MultiPing to ping hundreds of IPs (from access points and such) and check their performance (packet loss, latency) and uptime. The program is very easy to use, but I was wondering if someone could recommend something that would work better and that would also work on Linux. The features I'm looking for are: Notification types: at least desktop notifications and SMS, but it would be great if it also had e-mail, IM, or other types of notifications (MultiPing has some of these, but they don't work too well). Being notified about the root problem only: since some devices are dependent on others, I'd like to be notified only about the root problem. E.g. let's say I have A[x.x.x.222] -> B[x.x.x.33] -> C[x.x.x.44] -> D[x.x.x.55], and B goes down, so C and D are also down. Is it possible to get a notification only about B being down? Light on resources. Ideally multiplatform, or at least available for both Linux and Windows. I've heard about Nagios and Shinken being used for monitoring. Would you recommend that I use something of that sort, or would that be too much for my needs? If using Nagios, Shinken, or similar software is recommended, can anyone tell me what sites I should go to or what books I should get that would be good for someone who is totally new at this? I'd appreciate any suggestions.
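
    For scale, the "root problem only" behaviour is exactly what Nagios/Shinken host "parents" definitions give you (children of an unreachable parent are flagged UNREACHABLE rather than DOWN and don't alert). Before going that far, a minimal sketch of the idea with fping - assuming fping and a working local mail command, with one IP or hostname per line in hosts.txt:

      # print only the hosts that stop answering, then mail the list if non-empty
      fping -u -t500 < hosts.txt > /tmp/down.txt 2>/dev/null
      [ -s /tmp/down.txt ] && mail -s "Hosts not responding" admin@example.com < /tmp/down.txt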

    Read the article

  • 64-bit Windows 7 gets stuck on logo screen on bootup

    - by Richard B
    I have a PC running Windows 7 in my office which I'm not using directly at the moment (because I'm working elsewhere as a consultant), so I've only been accessing it using TeamViewer (http://www.teamviewer.com/), which means the PC has been running for quite some time now - though I've restarted it maybe twice a week. A few days ago I couldn't access it using TeamViewer, and when I got to the office the screen was black with only the mouse pointer showing. The PC has four hard disks, three of them (all 1 TB) in RAID 5. This is what I've done so far: I reboot and everything seems to load correctly. I get to a screen that gives me two choices - boot Windows normally or perform a startup repair. Choosing to boot Windows only gets me to the Windows 7 logo screen, whose animation just loops over and over again. Choosing to repair gets me to the repair screen that "checks for problems", and then it gets stuck on the "Attempting repairs..." screen (I let it run for about 24 hours before giving up). What is the next step to take? I don't have any backups and no system restore points saved. I can access files and folders through a terminal window using a Windows 7 DVD, so I guess nothing is lost yet... Please help me, thanks!
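
    Since the DVD's command prompt can already see the files, the usual next steps (a sketch, not a guaranteed fix) are a filesystem check and rebuilding the boot records - note that if C: is on the RAID array, the recovery environment may need the RAID driver loaded before it can see it:

      :: from the Windows 7 DVD: Repair your computer -> Command Prompt
      chkdsk C: /f /r
      bootrec /fixmbr
      bootrec /fixboot
      bootrec /rebuildbcd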

    Read the article

  • How to set up postfix to use an external SMTP server

    - by Leo
    I have a web server and a mail server. Both are under the same domain, except that one is mywebsite.com and the other is mail.mywebsite.com; they have different IPs. I'm trying to set up postfix on my web server so it uses my mail server to send e-mail. I followed this guide: http://www.howtoforge.com/postfix_relaying_through_another_mailserver but I am getting this error in my logs:

      Oct 28 02:56:45 mywebsite postfix/smtp[1660]: warning: host mail.mywebsite.com[xxx.xxx.xx.xx]:25 greeted me with my own hostname mywebsite.com
      Oct 28 02:56:46 mywebsite postfix/smtp[1660]: warning: host mail.mywebsite.com[xxx.xxx.xx.xx]:25 replied to HELO/EHLO with my own hostname mywebsite.com

    I've searched around and read that you can't use the same hostname when relaying to a separate SMTP server. Is there a workaround for this? Do I need to set up my mail server with a separate domain name? Also, I have my MX records set up for both mywebsite.com and mail.mywebsite.com. I'm not that experienced with this, so if I need to give more info let me know. Thanks!
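
    That warning means whatever is answering on mail.mywebsite.com:25 announces itself as mywebsite.com - either the name is resolving back to the web server, or the mail server's own hostname is set wrong. A hedged sketch of the checks and the relevant settings:

      # does mail.mywebsite.com really resolve to the mail server's IP?
      dig +short mail.mywebsite.com

      # what does the mail server call itself? (look at the 220 banner)
      telnet mail.mywebsite.com 25

      # /etc/postfix/main.cf on the WEB server (sketch)
      myhostname = mywebsite.com
      relayhost  = [mail.mywebsite.com]:25

      # /etc/postfix/main.cf on the MAIL server - its hostname must differ
      myhostname = mail.mywebsite.com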

    Read the article

  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one but everything seems to be missing major steps, or are out of date. And every article keeps linking back to this Howto from rubyonrails.org that doesn't exist. The sense that I'm getting is that even if I manage to make this work, IIS' FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to setup a reverse proxy in IIS that points to Apache & Mongrel/Passenger using ARR and UrlRewrite. Is there anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat running on this server using a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
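
    For the reverse-proxy route, a hedged sketch of what the IIS side looks like once ARR and URL Rewrite are installed and "Enable proxy" is ticked in ARR's server-level settings; the backend address assumes the Rails app (Mongrel, Thin, or Apache/Passenger) is listening on localhost:3000, which is an example, not a given:

      <!-- web.config for the IIS site -->
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="RailsReverseProxy" stopProcessing="true">
              <match url="(.*)" />
              <action type="Rewrite" url="http://localhost:3000/{R:1}" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>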

    Read the article

  • Running Tor relay on personal server: can this hurt?

    - by rxt
    I would like to install Tor as a relay on a hosted personal server; I have loads of bandwidth that I don't use. It would not be an exit point. Could this hurt my server somehow? The possible problems I'm thinking of are blacklisting of the IP address, or something similar. I know that exit points get blacklisted by many services. So if I'm using Tor as a client, I will probably present a blacklisted IP address to the outside world and so cannot access those sites. However, I'm running this on a server, and as a public relay. Could this hurt the functioning of, and access to, websites on this server? I could also install it as a bridge - I'm a little confused about the difference between bridging and relaying. If I understand correctly, the only difference is that a relay is public. Does this mean that bridging only works if I know someone and give them my IP address?
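
    On the relay-vs-bridge point: a bridge is simply a relay whose address is not published in the public relay directory, so it is handed out only to users who explicitly request bridge addresses - you don't have to give it out personally. A minimal torrc sketch for both variants (nickname and ports are examples):

      # /etc/tor/torrc - non-exit relay (publicly listed)
      ORPort 9001
      Nickname myrelay
      ExitPolicy reject *:*      # never act as an exit

      # ...or run it as a bridge instead (not listed publicly)
      BridgeRelay 1
      ORPort 443
      ExitPolicy reject *:*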

    Read the article

  • IPtables and Remote Desktop with Proxy

    - by Sebastian
    So I set up a Windows Server 2008 R2 web server on VirtualBox, currently using a bridged network. I can remote-desktop to the machine hosting the VM (10.0.0.183) but cannot remote-desktop to the VM itself (10.0.0.195). The remote port on the VM is set to 5003, and the VM is set up (on the Windows side) to accept remote connections. We also use a proxy for our internet access, and I added these rules under NAT on our proxy box (CentOS 5):

      -A INPUT -p tcp --dport 3389 -j ACCEPT
      -A REROUTING -i ppp0 -p tcp --dport 3389 -j REDIRECT --to-port 5003
      -A FORWARD -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

    I've been trying for hours and hours and just cannot get it to work. I also used FreeDNS so that we can use a domain name to connect to this VM over the internet (the DNS name points to our external IP address). If we don't get this right we will have to purchase a PPPoE connection from an ISP to connect to this VM remotely, but I know there is an alternative route if I can just get this port forwarding right!
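
    Two issues stand out: "REROUTING" is not an iptables chain (the nat chain is PREROUTING), and REDIRECT only redirects to a port on the proxy box itself - forwarding to another machine needs DNAT. A hedged sketch, assuming ppp0 is the internet-facing interface as in the rules above:

      # on the CentOS proxy/gateway
      iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 3389 \
               -j DNAT --to-destination 10.0.0.195:5003
      iptables -A FORWARD -p tcp -d 10.0.0.195 --dport 5003 \
               -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

      # make sure forwarding is enabled
      echo 1 > /proc/sys/net/ipv4/ip_forward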

    Read the article

  • How to setup a virtual host in Ubuntu running on Amazon EC2 instance?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp when I browse to apps.mydomain.com/myapp - how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName apps.mydomain.com/myapp
          DocumentRoot /var/www/myapp/public
          <Directory />
              Options FollowSymLinks
              AllowOverride All
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride All
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>

    Any idea why I still see a file listing instead of being taken to the document root? In case someone asks, the app is based on the Laravel 4 framework. It's really bad right now, because anyone can access the files from the browser.
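
    One thing to note: ServerName cannot carry a path ("apps.mydomain.com/myapp" is not a valid server name). A hedged sketch of one way to serve the Laravel public/ directory under the /myapp path of that subdomain:

      <VirtualHost *:80>
          ServerName apps.mydomain.com
          DocumentRoot /var/www/html

          Alias /myapp /var/www/myapp/public
          <Directory /var/www/myapp/public>
              Options FollowSymLinks
              AllowOverride All      # lets Laravel's public/.htaccess rewrite to index.php
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

    Laravel's public/.htaccess may also need a "RewriteBase /myapp" line when the app is served from a sub-path; that detail varies by setup.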

    Read the article

  • OpenOffice Vs Microsoft Office 2007/2010

    - by Moody Tech
    I have been asked to summarise the pros and cons of choosing between Microsoft Office and OpenOffice. I have a broad idea of what needs to be said, but I would like to open a discussion here and have a single place to go to when the time comes to give the summary to management. There are obvious points of contention: for me, the lack of compliance with Group Policy is a major concern [default save location, visibility of C:, visibility of files and folders on the HDD]. However, I am sure that functionality and compatibility will be the prime movers. We are looking at making major savings by reducing our commitment to Microsoft licensing. So what are your experiences? What happens when there are no direct equivalents? [Word has a close match in OpenOffice, but the match for a database solution is not as close, and neither is there one for Outlook (connecting to Exchange Server and downloading all calendars, shared calendars and scheduled events - Exchange will still exist after the move to open-source solutions).] In summary, then: what do you see as the benefits of this plan, and how do you see the problems manifesting themselves? Discuss.... Many thanks.

    Read the article

  • How can I change the file name of the Program Files (x86) folder?

    - by Madrauk
    Let me stop you before the "you shouldn't do that" and "you will corrupt your file paths": I know what I'm doing, but I'll give you the story to convince you. Basically, my hard drive is failing to the point where programs installed on it are not responding consistently. So, in preparation for its replacement, I'm moving as many files as possible to my secondary drive, because the new drive will be smaller (it's an emergency - I can't buy a fancy, expensive new drive) and so that I can actually use my computer until the new drive comes in. The basic idea is this: I've copied the contents of the Program Files (x86) folder to another spot on the other drive, and I want to replace the original folder on the C: drive with a symbolic link that points to the other drive, so the programs can run from that drive and be fine, while saving space on C: and simplifying the move when the new drive comes in. To do that, I need to rename the Program Files (x86) folder, make the symbolic link, hopefully delete the renamed folder, and then restart my computer so all the programs are running properly and I can confirm it works. Now that I've told you why, can anyone help me out?
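
    For reference, a sketch of the copy/replace/junction sequence from an elevated command prompt (D: as the destination drive is an assumption). The rename will fail while any program under that folder has files open, and NTFS permissions/TrustedInstaller ownership can get in the way, so this is most reliably done offline or from a second admin session:

      robocopy "C:\Program Files (x86)" "D:\Program Files (x86)" /MIR /COPYALL /XJ
      ren "C:\Program Files (x86)" "Program Files (x86).old"
      mklink /J "C:\Program Files (x86)" "D:\Program Files (x86)"
      :: once everything is confirmed working, remove the old copy
      rd /s /q "C:\Program Files (x86).old"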

    Read the article

  • Redirect without changing URL

    - by Coobadivin
    Here's the setup. We have a hardware load balancer with an http virtual cluster. Let's call this virtual cluster example1.com. This virtual cluster load balances between two squid reverse proxies which are also on the same physical servers as the web servers. Squid listens on 80 and points to itself as the cache_peer web server which listens on 81. We also have a standalone web server which we will call example2.com. What we are trying to do is create a subdirectory on example1.com called example1.com/example2. This will point to example2.com, but we want our users to stay at example1.com/example2 in their browser. So, it's like a redirect without actually being a redirect. How the hell do I go about doing this? Is this even possible? I'm looking at squid docs in the meantime. example1.com is running a proprietary web server - not Apache :( We can't host example2.com's content in example1.com's file system. These are two very different platforms.
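
    Since Squid is already in front of everything, one hedged squid.conf sketch is to add example2.com as a second origin peer and route only /example2 paths to it (names and ports below mirror the description but are illustrative):

      cache_peer 127.0.0.1     parent 81 0 no-query originserver name=local_web
      cache_peer example2.com  parent 80 0 no-query originserver name=ex2

      acl to_example2 urlpath_regex ^/example2(/|$)
      cache_peer_access ex2        allow to_example2
      cache_peer_access local_web  deny  to_example2
      cache_peer_access local_web  allow all

    Note that example2.com will then receive requests for /example2/..., so either that app has to be mounted under the same prefix or a rewrite is needed on that side; absolute links and redirects it generates are a separate problem.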

    Read the article
