Search Results

Search found 33677 results on 1348 pages for 'access levels'.

  • can't get to admin page after factory reset netgear wg602

    - by stefanB
    I have a wireless Netgear WG602 on my home network (connected to my internet modem/router). I had it secured and locked down to accept connections only from specific MAC addresses. I've forgotten the password I used, but my MacBook laptops can still connect (after multiple OS updates they can't retrieve and display the password, but they can still use it to log in to WPA), so I want to reconfigure the access point from scratch for some new devices. I reset the WG602 to factory settings (held the reset button for 10 seconds), set my laptop's IP address to the local address suggested in the manual (192.168.0.210, netmask 255.255.255.0), and connected the Netgear to my MacBook Pro via an Ethernet cable, but I can't get to the admin page at 192.168.0.227 as suggested by the manual (in Firefox or Safari). At this stage the Netgear is not connected to the router; it is only connected to the MacBook. I can't ping the wireless access point either, even though it is powered on and all its lights are on. What am I doing wrong? Last time I configured it via Windows; now I only have the MacBook, which I've used with this access point for two years, so there are no compatibility problems.
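
    A minimal Terminal sketch of the manual-address setup and a first probe of the default admin address; the interface name en0 is an assumption, so check ifconfig for whichever wired adapter is actually in use:

      # give the wired interface the address the manual suggests (en0 assumed)
      sudo ifconfig en0 inet 192.168.0.210 netmask 255.255.255.0
      # probe the WG602's documented default admin address
      ping -c 4 192.168.0.227
      # list anything that answered ARP on the link, in case the AP kept a different address
      arp -a

    If arp -a shows the access point answering from a different address, the factory reset may not have taken and is worth repeating.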

    Read the article

  • In APC+PHP, how much RAM is too much? Is it okay to set apc.shm_size to many GB?

    - by Jeremy Clarke
    On our server we have a LOT of RAM for our traffic levels (16GB). The HTTP processes regularly eat up all the CPU and need to be restarted without even getting close to using swap, so I'm looking for ways to spend RAM to ease the load on Apache (and/or help the separate MySQL server, which may be what is breaking Apache). I have many WordPress installs on the httpd instance, so APC sometimes uses as much as 900MB of RAM (according to the apc.php charts). Just in case, I have apc.shm_size set to 1600MB, which is more than it needs but not more than I can spare. This means there is usually lots of extra RAM available to APC, but also very little turnover, and fragmentation is never more than 1%. Is this dangerous? Should I be slimming APC down to less than 1GB just on principle? Should I be expecting some turnover within APC in the name of bringing its overall footprint down? Having so much memory devoted to APC means that in top/htop every single httpd process shows ~1.9GB in the VIRT memory column. Obviously this is shared memory and not used per-process, but could it be hurting our server? NOTE: The root problem with the server remains unclear, but the effect is that about 60 times a day all 8 CPUs fill up to 100% and everything stops working until Monit sees that Apache is broken and restarts it (Monit also saves the MySQL server). I'm not sure if APC is even part of the problem, but I'm trying to optimize everything just in case.
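
    For reference, a sketch of how the relevant APC settings might look in the loader's ini file; the 1600M figure mirrors the value described above, while the file path and the other directives are assumptions for illustration rather than a recommendation (some older APC builds expect the size as a plain megabyte integer instead of the M-suffixed form):

      ; /etc/php.d/apc.ini (path assumed)
      extension = apc.so
      apc.shm_size = 1600M     ; the oversized shared-memory segment discussed above
      apc.shm_segments = 1     ; a single large segment
      apc.ttl = 0              ; never evict entries on age; rely on free space instead
      apc.stat = 1             ; re-check file mtimes, the WordPress-friendly default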

    Read the article

  • Can Spotlight or Media Browser index metadata contained in iPhoto or Aperture in Mac OS X?

    - by jaydles
    It seems silly to go to all the trouble to assign "Face" data to thousands of photos, but not make it possible to use that data to locate them outside of that application. Is there any way to get Spotlight or Media Browser in OSX (Snow Leopard) to index and recognize metadata (Faces, Places, etc.) contained in iPhoto or Aperture? I know that that metadata is stored in the "library" database for Aperture/iphoto, rather than on the actual files (which is too bad). And I can even potentially see why it might create challenges for spotlight to use it, since spotlight is presumably a file index system, not a media organizer, but surely the media browser used across the other OSX apps is intended to use it? The media browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the ilife apps (iphoto or Aperture, in this case) from the others (say, imovie, or Mail). It's particularly vexing since the photo app on the iphone sorts by faces by default. Additionally, the mac-based media browser does access smart albums and folders, so you could establish a workaround by creating a smart album for each "face" or place, or tag, and access them that way, but it seems like there must be an easier way. Am I missing something?

    Read the article

  • Google Drive terminates without error on startup

    - by Iszi
    I've used Google Drive for awhile now, but it won't start up after installing on my latest system re-build. I'm still using the same OS, hardware, and basic software load (antivirus, firewall, etc.) that I have for years, during which I had not previously had problems with Drive.

      OS: Windows 7 Ultimate x64
      Google Drive Version: 1.12.5329.1887

    Now, whenever I try to run Google Drive, it just spawns two instances of the executable which die shortly after. No error messages are posted to the desktop, and nothing indicating any problem is written to the Event Log. After some research, I've yet to find anyone having the same problem who's found an answer. I did find out how to run Google Drive in diagnostic mode, using the --vv parameter at the command line. After that, I opened up the sync log and got this:

      2013-10-31 17:11:24,039 INFO  pid=3664 1892:MainThread logging:1600 OS: Windows/6.1.7601-SP1
      2013-10-31 17:11:24,039 INFO  pid=3664 1892:MainThread logging:1600 Google Drive (build 1.12.5329.1887)
      2013-10-31 17:11:24,039 DEBUG pid=3664 1892:MainThread logging:1608 DEBUGGING DUMP is ON.
      2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 ERROR, UNEXPECTED EXCEPTION
      2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 [Error 5] Access is denied
      Traceback (most recent call last):
        File "<string>", line 232, in Main
        File "<string>", line 118, in RegisterCustomFileTypes
        File "P:\p\agents\hpal4.eem\recipes\353983091\base\b\drb\googleclient\apps\webdrive_sync\windows\build\pyi.win32\main\outPYZ1.pyz/windows.registry", line 62, in GetValue
      WindowsError: [Error 5] Access is denied
      2013-10-31 17:11:24,052 INFO  pid=3664 1892:MainThread logging:1600 Crash reporting disabled. Ignoring report.
      2013-10-31 17:11:24,052 INFO  pid=3664 1892:MainThread logging:1600 Exiting with error code: 0

    I'm running on an account with Administrator-level permissions, and have even tried using "Run As Administrator" on the EXE. I'm not sure why it's looking for a P:\ drive, as no such volume has ever been mounted on this system. What should I do to try to further troubleshoot, and resolve, this issue?

    Read the article

  • OpenVPN as a way to reach a LAN that sits behind another client rather than behind the server

    - by Einar
    Setup: one LAN handled by a router without a publicly available IP address but without any outbound connection restrictions ("target LAN"), and a separate server publicly reachable from the Internet ("gateway"). I am trying to set up OpenVPN so that a third client can connect to the "gateway" and access the "target LAN". As the router of the "target LAN" is not reachable from the Internet directly, it connects to the gateway itself via OpenVPN as well. The problem is how to handle routing. The LAN router has two network interfaces (one for the outside network and one for the LAN itself). In OpenVPN (the server on the gateway) I set client-to-client and push "route 192.168.10.0 255.255.255.0", but I assume this is horribly wrong (it actually messed up the routing on the LAN router until I killed OpenVPN). OpenVPN is not using bridging; it is configured via tun. Other config details from the server:

      server 10.8.0.0 255.255.255.0
      client-config-dir ccd
      route 192.168.10.0 255.255.255.0

    And the client file in ccd is:

      iroute 192.168.10.0 255.255.255.0

    What can be adjusted to ensure that a third client can connect through OpenVPN and access the LAN mentioned earlier?
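
    A sketch of how the gateway's server config and the router's ccd file often end up looking for this kind of routed setup; the certificate paths are omitted and the ccd file name is an assumption, while the subnets are the ones given above:

      # /etc/openvpn/server.conf on the gateway (sketch)
      dev tun
      server 10.8.0.0 255.255.255.0
      client-config-dir ccd
      # tell the OpenVPN process which client owns 192.168.10.0/24
      route 192.168.10.0 255.255.255.0
      # let the third client reach the router client across the tunnel
      client-to-client
      # advertise the target LAN to connecting clients
      push "route 192.168.10.0 255.255.255.0"

      # ccd/target-lan-router  (file named after the router's certificate CN, assumed)
      push-reset                           # keep this client from receiving a route to its own LAN
      iroute 192.168.10.0 255.255.255.0

    One detail worth noting: push sends the route to every client, including the LAN router itself, which matches the broken-routing symptom described above; push-reset in that client's ccd file is one way to avoid it.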

    Read the article

  • Plesk FTP not working but SFTP and shell are working

    - by shamittomar
    I am facing a strange problem. FTP on my Plesk VPS is not working. Whenever I try to connect, the FileZilla FTP client says:

      Status: Resolving address of xxxxxxxxxxxxx.com
      Status: Connecting to xxx.xxx.xxx.xxx:21...
      Status: Connection established, waiting for welcome message...
      Error:  Could not connect to server

    So it's not even getting to the step of asking for a username/password; it's something else. SFTP on port 22 is working fine, and I can successfully get shell access and run commands. But I NEED FTP access on port 21 too. I have searched everywhere but cannot find any setting to enable it. This is the Plesk version info:

      Parallels Plesk Panel version 9.5.2
      Operating system: Linux 2.6.26.8-57.fc8
      CPU: GenuineIntel, Intel(R) Pentium(R) 4 CPU 3.00GHz

    Any help is appreciated.

    [EDIT]: The firewall is not blocking it. I have checked on the server and there are absolutely no blocking rules; the firewall states "All incoming/outgoing connections are accepted on FTP". On the client side (my PC) I can connect to other FTP servers, so this is not an issue with my PC's firewall. Moreover, I cannot even connect to the FTP server from online FTP clients like net2ftp.
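
    A sketch of the first checks usually worth running on the server itself, assuming a Plesk 9.x box where the FTP service is ProFTPD launched through xinetd (that pairing is an assumption; adjust the service names if the box differs):

      # is anything listening on port 21 at all?
      netstat -tlnp | grep ':21'
      # Plesk 9 commonly runs ProFTPD out of xinetd; check it is enabled and restart it
      grep -r proftpd /etc/xinetd.d/
      service xinetd restart
      # can the server reach its own FTP port?
      telnet 127.0.0.1 21

    If the local telnet connects but remote clients still cannot, the block is upstream of the VPS (provider firewall or NAT) rather than in Plesk itself.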

    Read the article

  • OpenVPN bridge network from routed clients

    - by gphilip
    I have the following setup:

      subnet 1 - 10.0.1.0/24, with a machine used as NAT and also running an OpenVPN client
      subnet 2 - 192.168.1.0/24, with an OpenVPN server (the machine in subnet 1 connects here)
      subnet 3 - 10.0.2.0/24, which uses the NAT machine in subnet 1 to access the internet, so all non-local traffic is routed there to the eth0 interface

    The OpenVPN client creates the tun0 interface and the appropriate routing, so from the NAT machine I can access machines in 192.168.1.0/24:

      [root@ip-10-0-1-208 ~]# telnet 192.168.1.186 8081
      Trying 192.168.1.186...
      Connected to 192.168.1.186.
      Escape character is '^]'.

      [root@ip-10-0-1-208 ~]# route -n
      Kernel IP routing table
      Destination      Gateway    Genmask          Flags Metric Ref Use Iface
      0.0.0.0          10.0.1.1   0.0.0.0          UG    0      0   0   eth0
      10.0.1.0         0.0.0.0    255.255.255.0    U     0      0   0   eth0
      10.8.0.1         10.8.0.5   255.255.255.255  UGH   0      0   0   tun0
      10.8.0.5         0.0.0.0    255.255.255.255  UH    0      0   0   tun0
      169.254.169.254  0.0.0.0    255.255.255.255  UH    0      0   0   eth0
      192.168.0.0      10.8.0.5   255.255.0.0      UG    0      0   0   tun0

    However, when I try the same from subnet 3, it can't reach that machine:

      [root@ip-10-0-2-61 ~]# telnet 192.168.1.186 8081
      Trying 192.168.1.186...

    I suspect that this is because subnet 3 is routed to eth0 on the NAT machine in subnet 1 and the traffic cannot jump to tun0. What's the easiest way to resolve it? I don't want to use iptables. I can't change the routing from machines in subnet 1 because it's done in AWS and so it works only with specific interfaces. Also, the NAT machine gets its IP with DHCP, so bridging is a bit complicated. IP forwarding is set on the NAT machine:

      [root@ip-10-0-1-208 ~]# cat /proc/sys/net/ipv4/ip_forward
      1

    Thank you!
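
    If forwarding on the NAT machine itself is fine, the usual missing piece is that the OpenVPN server in subnet 2 has no idea how to route replies back to 10.0.2.0/24. A sketch of the server-side additions that would cover this, assuming access to that server's config and a ccd entry named after the NAT client's certificate CN (the file name is an assumption):

      # on the OpenVPN server in subnet 2 (server.conf)
      client-config-dir ccd
      route 10.0.2.0 255.255.255.0     # kernel route for the remote subnet, via the tunnel

      # ccd/nat-client  (name assumed; must match the client's certificate CN)
      iroute 10.0.2.0 255.255.255.0    # tells OpenVPN which client owns 10.0.2.0/24

    Hosts in 192.168.1.0/24 other than the OpenVPN server also need a return route for 10.0.2.0/24 pointing at the server's LAN address; without it they answer via their own default gateway and the connection stalls exactly as shown above.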

    Read the article

  • Web Hosting: Any web host that supports more than 50,000 files?

    - by Devner
    Hi all, for my PHP & MySQL based application, I am trying to buy website hosting from a host that does not limit the number of files I can carry in my hosting account. Almost all hosts have a common limit of 50,000 files (some call it 50,000 nodes); the rest, to the extent of my search, are not even close. I have gone through various websites, Googled a lot of information, and spoken with the customer service of the hosting companies, and they said that they have a limit of 50,000 files and that's why they call it the LIMIT. Now my application is a kind of social networking website, where people can upload various files of varying size. So if 50,000 users were to join the website and upload one file each, the limit of 50,000 would be reached very easily, and my 50,001st customer would start facing file upload problems (and so would my account). So I would like to know if there are any website hosting services that do NOT levy such restrictions. In summary, I need the following options:

      - No maximum file limit (more than 50,000 files in the account).
      - No maximum file upload limit in the server settings (10MB, 12MB, 15MB, 20MB, etc.).
      - Ability to upload files of various types (zip, flv, jpg, png, etc.).
      - Ability to stream audio and video (live audio & video not necessary).
      - Access to .htaccess.
      - Access to php.ini, my.cnf or my.ini (this would be a plus).
      - Supports SSL.
      - Provides dedicated hosting (& IP) as well.
      - Monthly payments without contracts are a plus.

    If you know of any such website hosting services, please post a reply (a link will be appreciated). Thank you.

    Read the article

  • Family server setup [closed]

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents who all live in their own homes). I really like the way it is set up right now with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple:

      1) It should be easy to access, using standard web browsers or by mounting the drive (my family shouldn't have to use any VPN setup or use PuTTY etc.).
      2) It should be somewhat secure. We just want to share family pictures instead of putting them on Facebook or Picasa or some other web server; nothing top secret.

    Here is what I've looked into:

      1) WebDAV. It seems decent, but Windows 7 doesn't seem to like it very much, even with digest mode authentication. User controls and permissions are not as flexible as Samba's (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it?
      2) I read somewhere to stay away from FTP as it is outdated, and that there are newer and better internet file-server setups. Was that a reference to WebDAV?

    I am so confused, please help... Manny
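
    For what it's worth, a minimal Apache WebDAV sketch of the kind of setup being weighed up here, assuming Apache with mod_dav, mod_dav_fs, mod_auth_digest and mod_ssl available; every path and hostname below is an assumption, and HTTPS plus digest auth is only one reasonable hardening choice:

      # /etc/apache2/sites-available/family.conf (sketch)
      DavLockDB /var/lock/apache2/DavLock
      <VirtualHost *:443>
          ServerName files.example.org
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/files.example.org.pem
          SSLCertificateKeyFile /etc/ssl/private/files.example.org.key
          Alias /family /srv/family
          <Directory /srv/family>
              Dav On
              AuthType Digest
              AuthName "family"
              AuthUserFile /etc/apache2/family.digest   # created with htdigest
              Require valid-user
          </Directory>
      </VirtualHost>

    Per-user folders can then be restricted with separate Directory blocks and their own Require lines, which recovers some of the per-user control Samba provides.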

    Read the article

  • Instructions to setup primary and only domain controller

    - by Robert Koritnik
    Where could I get the best step-by-step instructions (with some simple explanations) on how to set up a domain controller on Windows Server 2008 R2 Server Core? I don't know what I need. Do I need DNS as well as AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it... This is my HW config:

      - a Linksys internet router with DHCP
      - my dev machine is Windows 7
      - my DC machine is a VM inside my dev machine
      - my dev machine has a hardware network adapter to the Linksys and a virtual network adapter to the DC
      - the DC machine has two network adapters: one to the Linksys (to be internet connected so it can be updated etc.) and one to the host (my dev Win7 machine)

    Edit: My development machine should access the domain controller and log on using domain credentials. The development machine would access the internet directly via the Linksys router. My domain controller machine would only serve authentication and (if I'm able to configure it right) should also have Active Directory Federation Services in a workable condition. I hope this is a bit clearer now. At least a small bit.

    Read the article

  • Extending ext4 partition on debian7.0 on vsphere

    - by VoidPointer
    I allocated 15 GB via thin provisioning when I found 8 GB to be insufficient. Now the Debian guest does not recognize the change in size.

      root@debian7-x64:~# lvdisplay
        --- Logical volume ---
        LV Path                /dev/debian7-x64/root
        LV Name                root
        VG Name                debian7-x64
        LV UUID                EU6mg0-XTXC-ci3D-bQJi-7XN6-r8Hp-SYxcj0
        LV Write Access        read/write
        LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
        LV Status              available
        # open                 1
        LV Size                7.39 GiB
        Current LE             1892
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           254:0

        --- Logical volume ---
        LV Path                /dev/debian7-x64/swap_1
        LV Name                swap_1
        VG Name                debian7-x64
        LV UUID                xDNtoz-tJUq-M5D6-GGCN-gzcD-fwUv-fYYDR1
        LV Write Access        read/write
        LV Creation host, time debian7-x64, 2013-06-25 12:02:49 +0530
        LV Status              available
        # open                 2
        LV Size                376.00 MiB
        Current LE             94
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           254:1

      root@debian7-x64:~# pvdisplay
        --- Physical volume ---
        PV Name               /dev/sda5
        VG Name               debian7-x64
        PV Size               7.76 GiB / not usable 2.00 MiB
        Allocatable           yes (but full)
        PE Size               4.00 MiB
        Total PE              1986
        Free PE               0
        Allocated PE          1986
        PV UUID               SehkzH-Gq8Y-jI2f-27Tb-uv1Z-tR1R-5OnTxR

      root@debian7-x64:~# sfdisk -s
      /dev/sda: 15728640
      /dev/mapper/debian7--x64-root: 7749632
      /dev/mapper/debian7--x64-swap_1: 385024
      total: 23863296 blocks

    Help me extend this partition. Rebooting is not a problem, but I don't have any live CD. Environment: Debian 7 with LVM, ext4 partition, running on vSphere. I can provide more details when needed.
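
    A sketch of the usual growth path for this layout (grown vSphere disk, LVM on /dev/sda, ext4 root). It assumes the extra space sits at the end of the disk and that a new partition slot (sda3 below) is free; double-check both with fdisk -l before changing anything:

      # make the kernel notice the grown virtual disk (no reboot needed on vSphere)
      echo 1 > /sys/class/block/sda/device/rescan
      fdisk -l /dev/sda                      # confirm the extra space is visible

      # create a new partition of type 8e (Linux LVM) in the free space
      fdisk /dev/sda                         # n, p, 3, accept defaults, t, 3, 8e, w
      partprobe /dev/sda                     # or reboot if the kernel keeps the old table

      # grow the volume group, the root LV, and then the filesystem
      pvcreate /dev/sda3
      vgextend debian7-x64 /dev/sda3
      lvextend -l +100%FREE /dev/debian7-x64/root
      resize2fs /dev/debian7-x64/root        # ext4 can be grown online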

    Read the article

  • Connect two networks

    - by Meek Barrios
    I am connecting two different offices with a wireless link and Linux boxes. Hardware: 2 Cisco RV42s, 2 dual-homed Linux boxes running Debian, 2 2Wires and 2 AirMax 5s. The configuration is:

      Office A
        LAN A (10.1.1.0/24) -> RV42 A (WAN1 - 10.1.1.254) -> 2Wire A (Internet)
        LINUX A: ETH0 (LAN) 10.1.1.253, ETH1 (LINK) 10.1.3.3

      Wireless link: AirMax A <-> AirMax B, connected as a wireless bridge

      Office B
        LAN B (10.1.2.0/24) -> RV42 B (WAN1 - 10.1.2.254) -> 2Wire B (Internet)
        LINUX B: ETH0 (LAN) 10.1.2.253, ETH1 (LINK) 10.1.3.4

    The network configuration is:

      LAN A    - default gateway 10.1.1.254
      RV42 A   - static route 10.1.3.0/24 via 10.1.1.253
                 static route 10.1.2.0/24 via 10.1.1.253
                 default on 192.168.1.1 (WAN1 internet access)
      Linux A  - ETH0 10.1.1.253 netmask 255.255.255.0 gw 10.1.1.254
                 ETH1 10.1.3.3 netmask 255.255.255.0 gw 10.1.3.1
      AirMax A - 10.1.3.1 netmask 255.255.255.0 gw 10.1.3.1

      LAN B    - default gateway 10.1.2.254
      RV42 B   - static route 10.1.3.0/24 via 10.1.2.253
                 static route 10.1.1.0/24 via 10.1.2.253
                 default on 192.168.1.1 (WAN1 internet access)
      Linux B  - ETH0 10.1.2.253 netmask 255.255.255.0 gw 10.1.2.254
                 ETH1 10.1.3.4 netmask 255.255.255.0 gw 10.1.3.2
      AirMax B - 10.1.3.2 netmask 255.255.255.0 gw 10.1.3.2

    Both Linux boxes have ip_forward set to 1 and the following iptables rules:

      iptables -F
      iptables -X
      iptables -P FORWARD ACCEPT
      iptables -P INPUT ACCEPT
      iptables -P OUTPUT ACCEPT

    I can ping any IP in the 10.1.1.0/24 segment from Linux B, and any IP in the 10.1.2.0/24 segment from Linux A, but I cannot connect to HTTP or FTP on those machines. From LAN A I cannot see any other network. I'm looking for some advice on this configuration, or a better solution. Regards
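
    Ping working while TCP services such as HTTP and FTP stall is a classic symptom of an MTU/MSS problem across a wireless bridge. A hedged diagnostic sketch for the two Linux routers (the target address in the first two commands is a placeholder, not a confirmed diagnosis):

      # watch whether the TCP handshake completes but larger packets never arrive
      tcpdump -ni eth1 'host 10.1.1.10 and port 80'     # run on Linux B; 10.1.1.10 is a placeholder host

      # probe the largest payload that crosses the link without fragmentation
      ping -M do -s 1472 10.1.1.253                     # from Linux B; lower -s until it succeeds

      # if the path MTU is below 1500, clamp TCP MSS on both Linux boxes
      iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu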

    Read the article

  • Windows 8 Install Hanging at first white-font boot splash

    - by Omega
    I'm trying to install the Windows 8 preview on my Samsung Series 9 (2012, Ivy Bridge). I've done a bit of a custom scheme with this one:

      - I'm using EFI/UEFI on this system.
      - I've seen no indication that this system supports Secure Boot (yay!).
      - My SSD is set up with GPT.
      - Ubuntu is already installed and working great via UEFI.
      - I'm trying to boot the Windows 8 installer from a USB stick via UEFI.
      - I don't have access to a CD drive.

    The problem is that the boot seems to hang at the very first splash screen: white Windows font, and the little beads never show up. My USB stick has an activity light; it blinks for the first few seconds but then goes back to its "nobody is talking to me" idle pulse. What I know:

      - UEFI booting is definitely working.
      - Windows 8, for those few seconds, seems to have some kind of access to the USB drive.
      - My Series 9 is running the latest BIOS/firmware.

    Any idea what I might be able to do to get Windows 8 installed?

    Read the article

  • How can I install .NET framework 3.5 on XP machines without internet connection?

    - by EricSchaefer
    I want to install .NET Framework 3.5 on a couple of machines that do not have internet access. If I install the "no internet access" package, it still wants to download something. How can I figure out what is missing? Are there other installation packages?

    Edit: I would post screenshots, but I cannot upload anything from here and the shots would be in German, so I present only the text, translated back to English. Installing the "full redistributable package": at the bottom of the license agreement page it displays this text:

      Size of download file: 67 MB
      Approximate download time: 2 h 44 min (56 kbit/s), 18 min (512 kbit/s)

    It shows this text even though I installed Windows Installer 3.1. After agreeing, it displays the "Download and Installation Status" dialog with a progress bar labeled "Download:" and the status "Connection to server attempted (try X of 5). Total Download Status: 56MB/67MB". I tried it in a VM with no network connection; it tries 5 times while the progress bar shows progress. Later the progress bar is labeled "Installation:". Even later it reports problems during setup and offers two buttons, "Send Report Later" and "Don't Send". Now here it comes: "Setup completed" and "Microsoft .NET Framework 3.5 has been deinstalled successfully." (Emphasis is mine.) "It is recommended to install current service packs and security updates. More information at Windows Update (link)."

    Edit 2: Installed Service Pack 3, but still no success.

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with Vim. Unfortunately, our servers are locked down enough that if you want to do anything, you need root access. We get tired of typing sudo before each command, which would mean constantly retyping the wonderfully complex passwords mandated on us, so naturally we all just execute sudo su - upon login to avoid all of this (even though that is obviously frowned upon). Of course, when it comes to Vim and custom .vimrc files, we are often stepping on someone else's custom .vimrc, and some users have whacked-out functionality in those files that may override behaviour we have no idea about, much less the patience to learn. When working as root on a Linux box, is there any way for all of us to still maintain our own .vimrc files without overwriting the file over and over again every time someone wants to use Vim? Ideally we want one solution across all servers, since we have many virtual machines with Vim installed, and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
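
    One hedged sketch of a per-user arrangement that survives becoming root: switch from sudo su - to sudo -i (which keeps SUDO_USER set in the new environment) and have root's shell point Vim at the invoking user's vimrc. This is an illustration built on that assumption, not a drop-in policy:

      # in /root/.bashrc on each server (sketch; assumes root's login shell sources this file
      # and that everyone switches with "sudo -i" rather than "sudo su -")
      if [ -n "$SUDO_USER" ] && [ -f "/home/$SUDO_USER/.vimrc" ]; then
          # -u makes Vim read the caller's vimrc instead of /root/.vimrc
          alias vim="vim -u /home/$SUDO_USER/.vimrc"
      fi

    Because the Windows home directories are mounted under /home/username on every server, the same .vimrc would follow each user across all of the virtual machines.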

    Read the article

  • Clustered MSDTC

    - by niel
    Hi, I'm setting up a SQL cluster (SQL 2008) on Windows 2008 R2. I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start up the resource, it does not pull through my settings to enable network access. The log shows this:

      MSDTC started with the following settings:
      Security Configuration (OFF = 0 and ON = 1):
      Allow Remote Administrator = 0, Network Clients = 0,
      Trasaction Manager Communication:
      Allow Inbound Transactions = 0, Allow Outbound Transactions = 0,
      Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 0,
      Enable SNA LU 6.2 Transactions = 1,
      MSDTC Communications Security = Mutual Authentication Required,
      Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0,
      Transaction Bridge Installed = 0, Filtering Duplicate Events = 1

    whereas when I restart the local DTC service it says this:

      Security Configuration (OFF = 0 and ON = 1):
      Allow Remote Administrator = 0, Network Clients = 1,
      Trasaction Manager Communication:
      Allow Inbound Transactions = 1, Allow Outbound Transactions = 1,
      Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 1,
      Enable SNA LU 6.2 Transactions = 1,
      MSDTC Communications Security = No Authentication Required,
      Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0,
      Transaction Bridge Installed = 0, Filtering Duplicate Events = 1

    The settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention. Any ideas?

    Read the article

  • "Ethernet" doesn't have a valid IP configuration

    - by Xuzuno
    I'm using an Ethernet cord to connect to my internet, and it had been working well until Thursday morning, when I turned on my laptop (Windows 8) to see a yellow triangle sign in the bottom right hand corner, in front of the Ethernet-connected symbol. Since then I haven't been able to access the internet from my computer. When I hover over the icon, it says "Unidentified network" and "No internet access". I've run the Windows 8 troubleshooter, and it says the problem found was ""Ethernet" doesn't have a valid IP configuration", but I'm unsure how to fix it. I think the problem is with my computer rather than my network, because I've tried another laptop (Windows 7) on the same Ethernet cable and connection and the internet works fine there. I've tried many fixes that I found online, and none of them worked. Yesterday I even tried a full system reset, where I re-installed Windows 8, re-partitioned, and wiped everything off the hard drive, but the exact same problem is still there. Today I also bought and tried a new Ethernet cable, which didn't help, so I then bought a USB-to-Ethernet adapter to make sure the Ethernet port on my laptop wasn't faulty. That didn't work either, and the same problem remains. I feel like I've tried everything, so can someone please help me?
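
    A few standard resets worth running from an elevated Command Prompt, offered as a hedged checklist rather than a guaranteed fix (they rebuild the TCP/IP and Winsock state and force a fresh DHCP lease); given that a full reinstall did not help, the output of the last command is the more interesting part, since it shows whether DHCP is even being attempted on the adapter:

      netsh winsock reset
      netsh int ip reset
      ipconfig /release
      ipconfig /renew
      ipconfig /flushdns
      rem then reboot and inspect the adapter's DHCP state
      netsh interface ip show config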

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per RAID 5 array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several smaller RAID 1+0 arrays, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is really whether we want one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0 arrays that we then stripe across using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.
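
    For the second option, a minimal LVM sketch of striping across two smaller hardware RAID 1+0 virtual disks; the device names /dev/sdb and /dev/sdc, the stripe size and the volume names are all assumptions for illustration:

      pvcreate /dev/sdb /dev/sdc                       # the two RAID 1+0 virtual disks
      vgcreate dw_vg /dev/sdb /dev/sdc
      # -i 2 stripes extents across both PVs, -I sets the stripe size in KiB
      lvcreate -n dw_lv -i 2 -I 256 -l 100%FREE dw_vg
      mkfs.ext4 /dev/dw_vg/dw_lv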

    Read the article

  • nginx: redirect requests that do not come from the load balancer

    - by dawez
    I have nginx on SERVER1 acting as a load balancer between SERVER1 and SERVER2. On SERVER1 I have the upstreams for the load balancing defined as:

      upstream de.server.com {
          # similar upstreams are defined for the other languages
          # SELF, SERVER1
          server 127.0.0.1:8082 weight=3 max_fails=3 fail_timeout=2;
          # other, SERVER2
          server otherserverip:8082 max_fails=3 fail_timeout=2;
      }

    The load-balancing config on SERVER1 is this one:

      server {
          listen 80;
          server_name ~^(?<LANG>de|es|fr)\.server\.com;

          location / {
              proxy_pass http://$LANG.server.com;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              # trying to pass a variable in the header to SERVER2
              proxy_set_header Is-From-Load-Balancer 1;
          }
      }

    Then on SERVER2 I have:

      server {
          listen 8082;
          server_name localhost;
          root /var/www/server.com/public;

          # test output values
          add_header testloadbalancer $http_is_from_load_balancer;
          add_header testloadbalancer2 not_load_bal;

          ## other stuff here to process the request
      }

    I can see that the testloadbalancer response header is set to 1 when the request comes through the load balancer, and it is absent on direct access to SERVER2:8082. I would like to bounce back to SERVER1 all direct requests sent to SERVER2, but keep the ones coming from the load balancer. In other words, direct access to SERVER2:8082 should be forbidden and redirected to SERVER1:80.
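
    A sketch of one way to act on that header inside the SERVER2 server block, assuming the Is-From-Load-Balancer header keeps the literal value 1 (a shared secret value would be safer, since any client can forge a header):

      server {
          listen 8082;
          server_name localhost;
          root /var/www/server.com/public;

          # requests without the marker header did not come through SERVER1
          if ($http_is_from_load_balancer != "1") {
              return 301 http://de.server.com$request_uri;   # target hostname assumed; pick the right language host
          }

          ## other stuff here to process the request
      }

    A firewall rule that only lets SERVER1's address reach port 8082 achieves the same effect without trusting a forgeable header.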

    Read the article

  • What does the -MailboxCredential parameter mean?

    - by cotablise
    Hello, I am writing regarding the Exchange PowerShell commands. When I want to use the following cmdlets, I have to insert the parameter -MailboxCredential:

      Test-OwaConnectivity
      Test-OutlookWebServices
      Test-ImapConnectivity
      Test-PopConnectivity

    On the official Microsoft site it is written: "The MailboxCredential parameter specifies the mailbox credential for a single URL test." I am not sure why this parameter is needed. I inserted incorrect credentials, yet the command finished successfully. Could you tell me why this parameter is needed? Example with wrong/incorrect credentials:

      [PS] C:\>Test-WebServicesConnectivity -ClientAccessServer EXhub1 -MailboxCredential (Get-Credential blablabla)

      CasServer LocalSite     Scenario  Result  Latency(MS) Error
      --------- ---------     --------  ------  ----------- -----
      EXhub1    Default-Fi... GetFolder Failure             [System.Net.WebExcept...

    Without the parameter:

      [PS] C:\>Test-WebServicesConnectivity -ClientAccessServer EXhub1
      WARNING: Test user 'extest_91ef41d34eef4' isn't accessible, so this cmdlet won't be able to test Client Access server connectivity.
      Could not find or sign in with user ********\extest_91ef41d34eef4. If this task is being run without credentials, sign in as a Domain Administrator, and then run Scripts\new-TestCasConnectivityUser.ps1 to verify that the user exists on Mailbox server EXHUB1.******
          + CategoryInfo          : ObjectNotFound: (:) [Test-WebServicesConnectivity], CasHealthCouldN...edInfoException
          + FullyQualifiedErrorId : FB9A14B6,Microsoft.Exchange.Monitoring.TestWebServicesConnectivity
      WARNING: No Client Access servers were tested.

    Thank you in advance.

    Read the article

  • Device CAL, User Cal or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard edition, which is fine. We also have 4 servers running Standard that process the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs who access these DW servers for maintenance (using the same Windows login via remote desktop). So visually:

      40 nodes -- 3 aggregators -- [2] raw (which we want to run EE) -- 2 calculators -- 1 datawarehouse

    Nodes PUSH to aggregators, raws PULL from aggregators, calculators PULL from raw, and calculators PUSH to the data warehouse. I am specifying push vs. pull in case that changes how the number of licenses is calculated.

      Q1) How many device (or user) CALs do we need?
      Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
      Q3) What happens if another device tries to access a server with the limited number of device CALs?
      Q4) Are the device CALs counted as simultaneous devices, or as a total?
      Q5) Do device and user CALs cost the same, or is there a difference?
      Q6) Would we need to buy a processor license (we are hoping not to)?

    Read the article

  • nginx proxying different servers for different subdomains

    - by The.Anti.9
    I just set up an nginx server. On the same computer as nginx, I have Apache running on port 8000 (this was previously set up), and I want the bare domain and the www. subdomain to go to the local Apache instance. But I want the stuff. subdomain to go to the server where I keep all my miscellaneous files (pictures, documents, etc.), which is also listening on port 80 at the IP 192.168.1.102. I tried configuring it, but when I go to my domain, I just get the "Welcome to nginx!" page. Here's what I have:

      user www-data;
      worker_processes 1;

      error_log /var/log/nginx/error.log;
      pid       /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include       /etc/nginx/mime.types;
          default_type  application/octet-stream;

          sendfile on;
          #tcp_nopush on;

          #keepalive_timeout 0;
          keepalive_timeout 65;
          tcp_nodelay on;

          gzip on;

          include /etc/nginx/conf.d/*.conf;

          server {
              listen 80;
              server_name theanti9.com www.theanti9.com;
              access_log /var/log/nginx/access.log;

              location / {
                  proxy_pass http://localhost:8000;
              }
          }

          server {
              listen 80;
              server_name stuff.theanti9.com;
              access_log /var/log/nginx/access.log;

              location / {
                  proxy_pass http://192.168.1.102:80;
              }
          }
      }

    I'm not really sure what's wrong. Any suggestions?
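
    The "Welcome to nginx!" page usually comes from a packaged catch-all server block pulled in by the conf.d or sites-enabled include, which then wins as the default server for the port 80 socket. A hedged checking sketch (the default.conf path is an assumption for a typical package layout):

      # look for a packaged catch-all server loaded ahead of these blocks
      grep -Rn "server_name\|listen" /etc/nginx/conf.d/ /etc/nginx/sites-enabled/ 2>/dev/null

      # if a default.conf or "default" site is found, disable it and reload
      mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled
      nginx -t && service nginx reload          # or /etc/init.d/nginx reload

      # confirm each host hits the right backend without touching DNS
      curl -H "Host: www.theanti9.com"   http://127.0.0.1/
      curl -H "Host: stuff.theanti9.com" http://127.0.0.1/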

    Read the article

  • Apache: Isn't chmod 755 enough to set up symlink or alias on Apache httpd on Mac OS 10.5?

    - by eed3si9n
    On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents, such as ~/Documents/foo/html, to be served as http://localhost/foo. The first thing I thought of was using an Alias, as follows:

      Alias /foo /Users/someone/Documents/foo/html
      <Directory "/Users/someone/Documents/foo/html">
          Options Indexes FollowSymLinks MultiViews
          Order allow,deny
          Allow from all
      </Directory>

    This got me 403 Forbidden. In the error_log I got:

      [error] [client ::1] (13)Permission denied: access to /foo denied

    The subfolder in question has chmod 755 access. I've tried requesting files directly, like http://localhost/foo/test.php, but that didn't work either. Next, I tried the symlink route: I went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has:

      Options Indexes FollowSymLinks MultiViews

    This still got me 403 Forbidden:

      Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo

    What else do I need to set this up?

    Solution:

      $ chmod 755 ~/Documents

    In general, the folder to be shared and all of its ancestor folders need to be viewable by the www service user.
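
    For reference, a quick sketch of how to spot which ancestor directory is blocking the web server; the _www account is the usual Apache user on Mac OS X 10.5, but treat that name as an assumption and adjust the paths to the real home directory:

      # every directory on the path needs the execute bit for "other" (or for _www specifically)
      ls -ld /Users /Users/someone /Users/someone/Documents \
             /Users/someone/Documents/foo /Users/someone/Documents/foo/html

      # or test access exactly as Apache would see it
      sudo -u _www ls /Users/someone/Documents/foo/html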

    Read the article

  • How can I create a VLAN on my extreme switch for a seperate subnet/domain?

    - by drpcken
    I'm putting together a small Active Directory implementation for a buddy of mine. I currently have 2 servers (one is the primary domain controller) and a couple of clients. I need to test and run updates on every machine in this domain, but I would have to plug them into my current LIVE domain to get internet access. From what I've read, having two separate domains on a single subnet is a bad idea (even though this is temporary), so I don't want to risk messing anything up on my production domain. I'm pretty sure I can create a separate VLAN on my Extreme 48-port switch and plug this smaller domain into it on a different subnet, but I don't know the commands. Both subnets would need internet access, of course (one of the things I can't wrap my head around is routing internet traffic between subnets, since the gateway is on the production subnet). My production domain is on subnet 192.168.200.0. The new domain I want to put online would go on subnet 192.168.10.0. A shove in the right direction would be greatly appreciated. Thank you!
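
    Assuming the switch runs ExtremeXOS (the command set differs on older ExtremeWare, so treat this purely as a sketch; the VLAN tag, ports and addresses are illustrative):

      create vlan lab
      configure vlan lab tag 10
      configure vlan lab add ports 45-48 untagged      # whichever ports the test domain uses
      configure vlan lab ipaddress 192.168.10.1/24
      enable ipforwarding vlan lab                     # route between lab and the production VLAN

    Internet access for 192.168.10.0/24 then still needs a return path: either a static route for 192.168.10.0/24 on the existing gateway pointing at the switch's production-side address, or NAT on a small router between the two subnets, since the production gateway otherwise has no idea where to send replies.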

    Read the article
