Search Results

Search found 20990 results on 840 pages for 'static block'.

  • DHCP Requests Failing

    - by Jon Rauschenberger
    Clients on our network recently started receiving this error when attempting to acquire an IP address from our DHCP server: "The name specified in the network control block (NCB) is in use on a remote adapter." The DHCP server is a Windows Server 2008 R2 machine; most of the clients are Windows 7. I can't find much on that error. Anyone have an idea what could cause it? Thanks, jon
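    That NCB message is typically a NetBIOS name conflict rather than a DHCP lease failure, so a first diagnostic step is to compare NetBIOS name tables on an affected client and on the server. A minimal check from a Windows command prompt (the remote address below is a placeholder):

        :: List the local NetBIOS name table; an entry whose status reads
        :: "Conflict" was also registered by another host on the network.
        nbtstat -n

        :: Inspect the name table of a suspect remote machine by IP address.
        nbtstat -A 192.168.1.50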

  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. Primary role is an NFS file server for Mac OS X clients. Hardware:
    Eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
    Eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC
    Config: ifconfig
        eth0  Link encap:Ethernet  HWaddr <MACADDRESS>
              inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0
              TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:581431978417 (581.4 GB)  TX bytes:259057368617 (259.0 GB)
              Interrupt:20 Memory:f7d00000-f7d20000
        eth1  Link encap:Ethernet  HWaddr <MACADDRESS>
              inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0
              TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:513826442 (513.8 MB)  TX bytes:33688 (33.6 KB)
              Interrupt:59
        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:507 errors:0 dropped:0 overruns:0 frame:0
              TX packets:507 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:45057 (45.0 KB)  TX bytes:45057 (45.0 KB)
    nano /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback
        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.0.150
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8
        # Second network interface
        auto eth1
        iface eth1 inet static
            address 192.168.0.100
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8
    Currently I am using nfs://192.168.0.100/Volumes/Storage on the OS X clients to mount the NFS share. My problem: why would all the data be going over the slower connection? (I have checked using various monitoring tools: bmon, iftop, glances, etc.) Also, after configuring /etc/network/interfaces with the above setup, I always get an error message at bootup about waiting for network configuration. Are these connected?
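    Since both NICs sit in the same subnet with the same gateway, the kernel is free to answer ARP for either address on either interface and will route replies through the first matching route, which in practice pins traffic to eth0. Below is a sketch of one way to pin 192.168.0.100 to eth1 with source-based policy routing, assuming both addresses stay in 192.168.0.0/24 (the table number 100 is arbitrary):

        # Answer ARP only on the interface that owns the address
        sysctl -w net.ipv4.conf.all.arp_filter=1

        # Route anything sourced from eth1's address out through eth1
        ip route add 192.168.0.0/24 dev eth1 src 192.168.0.100 table 100
        ip route add default via 192.168.0.1 dev eth1 table 100
        ip rule add from 192.168.0.100 table 100

    The duplicate gateway line under eth1 may also explain the "waiting for network configuration" message at boot, since only one default route can be installed; removing the second gateway (or moving it into its own table as above) is worth trying first.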

  • How to give a user NTFS rights to a folder, via Powershell

    - by Don
    I'm trying to build a script that will create a folder for a new user on our file server, then take the inherited rights away from that folder and add specific rights back in. I have it successfully adding the folder (if I give it a static entry in the script), giving domain admin rights, removing inheritance, etc., but I'm having trouble getting it to use a variable I set as the user. I don't want there to be a static user each time; I want to be able to run this script, have it ask me for a username, then have it create the folder and give that same user full rights to it based on the username I've supplied. I can use Smithd as a user, like this:
        New-Item \\fileserver\home$\Smithd -Type Directory
    But can't get it to reference the user like this:
        New-Item \\fileserver\home$\$username -Type Directory
    Here's what I have:
        # Creating a new folder and setting NTFS permissions.
        $username = read-host -prompt "Enter User Name"
        New-Item \\fileserver\home$\$username -Type Directory
        Get-Acl \\fileserver\home$\$username
        $acl = Get-Acl \\fileserver\home$\$username
        $acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\Domain Admins","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain\"+$username,"FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl \\fileserver\home$\$username $acl
    I've tried several ways to get it to work, but no luck. Any ideas or suggestions would be welcome, thanks.
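    A sketch of the same script with the path built once in a quoted string, so the variable expansion is unambiguous (fileserver, home$, and the Domain name are placeholders carried over from the question):

        $username = Read-Host -Prompt "Enter User Name"
        $path = "\\fileserver\home`$\$username"   # backtick keeps the $ in home$ from being parsed as a variable
        New-Item -Path $path -ItemType Directory
        $acl = Get-Acl -Path $path
        $acl.SetAccessRuleProtection($true, $false)
        foreach ($account in "Administrators", "Domain\Domain Admins", "Domain\$username") {
            $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($account, "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
            $acl.AddAccessRule($rule)
        }
        Set-Acl -Path $path -AclObject $acl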

  • client flips between internal and external IP addresses??

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works. Except: one of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem for how I have some of the clients' SSH access configured (e.g., allowing access from the internal network but not the external one), but it also Just Seems Wrong. I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and it gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are also showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it. Does anybody out there have any advice? This is driving me crazy...
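    Since the client asks the internal DNS server first and the ISP's second, one test is whether resolver fallback explains the flipping: if the ISP's answer for the site's name is the public 216.103... address, the client reaches the site through the router's external interface whenever it happens to use that answer. A quick comparison from the Mac (both resolver addresses below are placeholders):

        # Ask the internal named instance
        dig @192.168.0.1 www.example.com +short

        # Ask the ISP's resolver directly
        dig @203.0.113.53 www.example.com +short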

  • CF9: what is jrun_iis6_wildcard.dll intended for?

    - by Stefano
    Hi, I just finished installing an application on a server running ColdFusion 9 (2008 R2 64-bit). The application I installed does not use CF but an ISAPI DLL. To run my application, I need to delete a handler using jrun_iis6_wildcard.dll that seems to process and block requests that should be processed by my ISAPI DLL. I don't know ColdFusion, but I'm curious: what is jrun_iis6_wildcard.dll intended for? Thank you in advance, stefano

  • How to configure a Router (TL-WR1043ND) to work in WDS mode?

    - by LanceBaynes
    I have a WRT160NL router (192.168.1.0/24, OpenWrt 10.04) as AP. It's:
    - WAN port: connected to the ISP
    - WLAN: working as an AP, using 64-bit WEP, SSID "MYWORKINGSSID", channel 5, password "MYPASSWORDHERE"
    - Its IP address is 192.168.1.1
    OK! It's working great! But: I have a TL-WR1043ND router that I want to configure as a "WDS". (My purpose is to extend the wireless range of the original WRT160NL.) Here is how I configure the TL-WR1043ND:
    1) I enable WDS bridging.
    2) In the "Survey" I select my already-working network.
    3) I set up the encryption (exactly the same as the already-working one).
    4) I choose channel 5.
    5) I type the SSID.
    6) I disable the DHCP server on it.
    After I reboot the router and connect to it (the TL-WR1043ND) over wireless, I try to ping google.com. From the ping I see that I can reach this router, but it seems that this router can't connect to the original one, the WRT160NL (so I get no ping reply from Google). The encryption settings/password are good; I checked them many, many times. What could be the problem? I'm thinking it could be a routing problem, but what should I add to the "Static Routing" menu? I changed the IP address of the TL-WR1043ND to 192.168.1.2, so if this is a routing issue then I should add a static routing rule that says: if the destination is any, forward the packet to 192.168.1.1.
    p.s.: I updated the firmware to the latest version. It's still the same.
    p.s.2: The HW version of the TL-WR1043ND is 1.8.
    p.s.3: Could the problem be that I use different routers? (If I bought another TL-WR1043ND and used it instead of the WRT160NL, with stock firmware rather than OpenWrt, would it work? Is "WDS" different on different routers?)
    p.s.4: I will try to check the router logs at night and paste them here!

  • mod_wsgi, .htaccess and rewriterule

    - by hadaraz
    I'm running several Django projects on the same Apache instance through mod_wsgi, configured with a virtualhost for each site (see the httpd.conf here). For one of the sites I want to use a static cache (staticgenerator), so I set up a directory with an .htaccess file which contains:
        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}/index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]
    where 3456 is the Django port on the server. Using this rewrite rule, the request is always forwarded to the mod_wsgi handler, even if the file or directory exists, and if the file index.html exists the request shows up as request-path/index.html. I tried another setup:
        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond $1 !-d
        RewriteCond $1index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]
    but got almost the same results: all requests are transferred to the mod_wsgi handler, but the request path is now the original one. To sum it up: What is the correct RewriteCond to use here? How do you transfer a request to the mod_wsgi handler, and is this the right way? If that's not the way to do it, then how do you serve static files from a directory when they exist, and serve from Apache/mod_wsgi when they don't? Thanks for your help.
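    A sketch of the usual per-directory pattern, assuming the cache directory is the document root: test the file itself, the directory, and the directory's index.html against the filesystem, and only proxy when none of them exists. The L flag stops rewriting once the proxy rule fires.

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        # Serve a cached index.html for bare directory requests
        DirectoryIndex index.html
        # Proxy to the Django backend only when nothing cached matches
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}/index.html !-f
        RewriteRule ^(.*)$ http://127.0.0.1:3456/$1 [P,L]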

  • pfsense single MAC is listed with several IP's in ARP table

    - by Tillebeck
    I have this problem: arp table filling up. But I am quite sure that I cannot blame Kaspersky. Scenario: a user plugs his computer in. He waits and waits but gets no IP by DHCP. Then he is told there is an IP conflict... He ends up assigning himself a static IP to access the net. In the ARP table of the router I see:
        192.168.24.144  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.145  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.181  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.150  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.151  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.152  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.156  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.157  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.159  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.160  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.130  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.132  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.164  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.137  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.140  00:16:41:42:3c:9e  Lenovo  LAN
        192.168.24.206  00:16:41:42:3c:9e  Lenovo  LAN
    The last one, .206, is the static address he gave himself. Several users describe the exact same problem. It started after removing some filters in the switches, so all users are on one LAN and can see each other. Before, when filters blocked access to each other's computers, no one reported this kind of behavior. Any ideas?

  • Can't connect to a Hyper-V VM from anywhere but the host OS

    - by Elbelcho
    I have an unusual situation on hand where I'm able to connect to a Hyper-V guest VM from the HOST, but not from anywhere but the host. The VM is running Win2k8R2 and has IIS installed and Remote Desktop enabled. If I browse to the IP from the host OS, the IIS7 page displays. I can also RDP into the guest OS from the host, as well as ping it. From OFF the host, RDP, web, and ping all fail. If I completely shut off the guest VM's firewall, ping starts to respond, but RDP and port 80 still don't. The physical host machine has 2 NICs installed, but only one is plugged in; the one plugged in has a static IP. I have one Hyper-V virtual network and it's set to external. The guest VM has one NIC with a different static IP than the host, but both are on the same subnet. The host machine is joined to the domain; the guest VM is not. Any suggestions? Thanks so much for any help you may be able to provide!

  • How to redirect all Internet traffic to OpenVPN Server

    - by JuliaS
    I have seen working solutions around the issue of forcing Internet traffic to go through the OpenVPN server, but they are all done in Linux. All I want to know is how to add an entry to the route table in Windows to make this happen. Connectivity between the client and server is fine; my Windows 7 client can establish a connection to the Windows 2008 server, but once established, Internet traffic still goes out directly from the local Windows 7 machine. Here are the details:
    Server: Windows 2008 Server with one NIC
        OpenVPN IP address: 192.168.0.1
        Local NIC IP address (connects the server to the Internet): 10.242.69.107
    Client: Windows 7 with one NIC
        OpenVPN IP address: 192.168.0.2
        ISP-allocated IP address: 10.0.8.2 (gateway 10.0.8.1)
    Server OpenVPN config:
        dev tun
        ifconfig 192.168.0.1 192.168.0.2
        secret static.key
        push "redirect-gateway def1"
    Client OpenVPN config:
        remote xxx.xxx.com
        dev tun
        ifconfig 192.168.0.2 192.168.0.1
        secret static.key
    I'm not an expert with adding routes, etc. I would be grateful if someone could let me know how to add this entry to my server/client route table.
    EDIT: Output from the client's netstat -rnv:
        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0         10.0.8.1         10.0.8.2      20
                 10.0.8.0  255.255.255.252          On-link         10.0.8.2     276
                 10.0.8.2  255.255.255.255          On-link         10.0.8.2     276
                 10.0.8.3  255.255.255.255          On-link         10.0.8.2     276
                127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
              192.168.0.0  255.255.255.252          On-link      192.168.0.2     286
              192.168.0.2  255.255.255.255          On-link      192.168.0.2     286
              192.168.0.3  255.255.255.255          On-link      192.168.0.2     286
                224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link         10.0.8.2     276
                224.0.0.0        240.0.0.0          On-link      192.168.0.2     286
          255.255.255.255  255.255.255.255          On-link        127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link         10.0.8.2     276
          255.255.255.255  255.255.255.255          On-link      192.168.0.2     286
        ===========================================================================
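    One detail worth noting before adding manual routes: push directives are only processed in TLS client/server mode, and this is a static-key setup, so the server's push "redirect-gateway def1" line is never delivered to the client. A sketch of the client-side equivalent:

        remote xxx.xxx.com
        dev tun
        ifconfig 192.168.0.2 192.168.0.1
        secret static.key
        # add the redirect here instead of relying on the server's push
        redirect-gateway def1

    The def1 method corresponds to adding these two half-default routes from an elevated prompt, which override 0.0.0.0/0 without deleting the original default route:

        route add 0.0.0.0 mask 128.0.0.0 192.168.0.1
        route add 128.0.0.0 mask 128.0.0.0 192.168.0.1

    The server also needs NAT (e.g. ICS or RRAS) on its Internet-facing NIC for the redirected traffic to get anywhere.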

  • Failed to bring up eth1 in a dual ips solution in ubuntu

    - by lxyu
    I'm using Ubuntu 12.04. I tried to assign two IPs to the two Ethernet cards in my server. The content of /etc/network/interfaces is like this:
        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet static
            address 114.80.156.a
            netmask 255.255.255.224
            gateway 114.80.156.b
        auto eth1
        iface eth1 inet static
            address 114.80.156.c
            netmask 255.255.255.240
            gateway 114.80.156.d
    a, b, c, d have different values, which means the two IPs are in different VLANs. But I can only bring up eth0 with this command:
        $ /etc/init.d/networking restart
        RTNETLINK answers: File exists
        Failed to bring up eth1.
        ...done.
    I have checked the question here, which shows the same problem as the one I encountered: Can only bring up one of two interfaces. But it seems it's not really solved. And in my situation, I need the two IPs to use two different gateways. So how do I fix this problem?
    Edit 1: changed the example config IPs from the 192.168.0.0/16 subnet to another "real" subnet.
    Edit 2: the purpose of doing this is fairly simple. The IP range I was previously in has no more room for new servers, and I have to move to another IP range. So I want to make the public servers bind to two IPs for the transition period. I have only really limited knowledge about routing and subnets. @BillThor @rackandboneman, would you please give me some keywords or links on how to set up routes for two IPs? And @Mike Pennington, how do you know I speak Chinese?
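    The "RTNETLINK answers: File exists" message is what ifupdown prints when the second gateway line tries to install a second default route. The usual fix is source-based policy routing: keep one gateway in the main table and give eth1 its own routing table. A sketch using the question's placeholder addresses (the table number 100 is arbitrary):

        auto eth1
        iface eth1 inet static
            address 114.80.156.c
            netmask 255.255.255.240
            # no "gateway" line; install eth1's default route in its own table
            post-up ip route add default via 114.80.156.d dev eth1 table 100
            post-up ip rule add from 114.80.156.c table 100
            pre-down ip rule del from 114.80.156.c table 100

    With the rule in place, replies to connections that arrived on eth1's address leave through eth1's gateway, while everything else keeps using eth0's default route.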

  • RDP access to DirectAccess Server via DirectAccess

    - by crgnz
    I have just set up a Win2008R2 DirectAccess server (and also a Win2008R2 Active Directory server). From the Internet I can log in via Remote Desktop to the AD server, but I cannot RDP into the DirectAccess server. I can ping both servers and get an IPv6 response. (I can RDP to the DirectAccess server from the internal company network.) DirectAccess is configured to allow full intranet access. I think I've hit a mental block; the answer will probably be obvious, but I just can't see it.

  • OpenBSD pf 'match in all scrub (no-df)' causes HTTPS to be unreachable on mobile network

    - by Frank ter V.
    First of all: excuse me for my poor usage of the English language. For several years I've been experiencing problems with the 'match in all scrub (no-df)' rule in pf. I can't find out what's happening here. I'll try to be clear and simple. The pf.conf has been extremely shortened for this forum posting. Here it is:
        set skip on lo0
        match in all scrub (no-df)
        block all
        block in quick from urpf-failed
        pass in on em0 proto tcp from any to 213.125.xxx.xxx port 80 synproxy state
        pass in on em0 proto tcp from any to 213.125.xxx.xxx port 443 synproxy state
        pass out on em0 from 213.125.xxx.xxx to any modulate state
    HTTP and HTTPS were working fine, until the moment a customer in France (Wanadoo DSL) couldn't view HTTPS pages! I blamed his provider and did no investigation of that problem. But then... I bought an Android Samsung Galaxy SII (Vodafone) to monitor my servers. Hours after I walked out of the telephone store: no HTTPS connections to my server! I thought my servers were down and drove back to the office very fast. But they were up. I discovered that disabling the rule match in all scrub (no-df) solves the problem. The Android phone (Vodafone NL) and Wanadoo DSL FR are now OK on HTTPS. But now I don't have any scrubbing anymore, which is not what I want. Does anyone here understand what is going on? I don't. Enabling scrubbing causes HTTPS webpages not to load on SOME ISPs, but not all. In systat, I strangely DO see a state created and packets received from those ISPs... Still confused. I'm using OpenBSD 5.1/amd64 and OpenBSD 5.0/i386. I have two ISPs at my office (one DSL and one cable); it affects both. This can be reproduced quite easily. I hope someone has experience with this problem. Greetings, Frank
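    A pattern that fits these symptoms is path MTU trouble: mobile and some DSL networks run with a reduced MTU, and the large certificate records of a TLS handshake are exactly the packets that exceed it, which would explain why states are created and plain HTTP still works. A sketch that keeps scrubbing but clamps the TCP MSS (1440 is an assumed conservative value, not a measured one):

        match in all scrub (no-df max-mss 1440)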

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting. I have a completely fresh install of Ubuntu 12.04 acting as a reverse-proxy gateway to our internal network. Our setup: we have one external IP but three domains we would like to point to various webservers on our internal network. It's not so much a load-balancing or caching issue, etc.; we are merely routing some client browsers to a port-80 webpage (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with Pound and everything seems to be working fine. Static pages load, etc. Everything is good with the exception of a Flash/Flex-based web client for a digital asset management program. The actual static page loads fine; it is just at the moment of entering credentials, be they correct or incorrect, and hitting login, that there is no response whatsoever, neither a rejection nor a confirmation. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file? The documentation isn't very clear, and we are not the authors of the DAM app and have no control over the code on the Flash/Flex side. Questions: Is there a particular config file/solution for Pound that allows Flash/Flex auth information to be forwarded? Is there another reverse-proxy program (nginx?) that allows this type of config? Am I looking at this the entirely wrong way; should Flash/Flex fundamentally not be allowed to have this access?
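    For reference, crossdomain.xml is a policy file the Flash player fetches from the web root of the server it wants to talk to; when the login call targets what the player considers a different domain and no policy file allows it, the request fails silently, which matches the no-response symptom. A minimal sketch to serve from the proxied site's root (the wide-open domain="*" is for testing only and should be narrowed afterwards):

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM
          "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <allow-access-from domain="*" />
        </cross-domain-policy>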

  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain and I've installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine; it has a connection but can't get an IP address. Even if I set the IP to a static 192.168.0.2, there is still no connectivity. I can ping it from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24-port switch (D-Link DES-1024D).
    Edit: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK... sort of. The problem now: if I set a static IP on the client (where I am typing this from), all is fine. BUT when I set it to get an address from DHCP, I get a correct IP from the server (192.168.0.2) but there is no Internet access on the client, although I can still ping the server fine from the client (which makes sense, because I was able to get an IP from it).
    Edit: I ended up just removing the Routing and DHCP server roles and going with ICS for the time being, until I get my hands on some better learning tools.

  • Can I extract a specific folder using tar to another folder?

    - by PeanutsMonkey
    I am new to the world of Linux and seem to have run into a stumbling block. I know I can extract a specific folder from an archive using the command
        tar xvfz archivename.tar.gz sampledir/
    but how can I extract sampledir/ to testdir/ rather than to the path the archive is in? E.g., currently the archive is at /tmp/archivename.tar.gz and I would like to extract sampledir to testdir, which is at /tmp/testdir.
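    A sketch using tar's -C option, which makes tar change into the given directory before extracting (the directory must already exist); GNU tar's --strip-components can additionally drop the leading sampledir/ prefix:

        mkdir -p /tmp/testdir

        # Extract so the folder lands at /tmp/testdir/sampledir/
        tar xvfz /tmp/archivename.tar.gz -C /tmp/testdir sampledir/

        # Or drop the leading component so the contents land directly
        # in /tmp/testdir/ (GNU tar)
        tar xvfz /tmp/archivename.tar.gz -C /tmp/testdir --strip-components=1 sampledir/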

  • How to tunnel all traffic through Tor?

    - by HappyDeveloper
    All I want is to be able to use Flash and JavaScript while using Tor (I don't intend to use it for torrents). Normally, using Flash with Tor is not recommended, because Firefox plugins run outside the sandbox, so the browser's proxy settings don't apply to them and they can reveal your real IP. But I think it should be possible to also redirect Flash's traffic to the same socket as the browser's, and to block the other outgoing ports just in case. Any ideas on how to do this?
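    The usual approach is Tor's transparent-proxy mode: have Tor open a TransPort and a DNSPort, then redirect all outgoing TCP and DNS to them at the firewall, so plugins never get a direct path out. A Linux/iptables sketch ("debian-tor" is the account Tor runs under on Debian/Ubuntu and may differ on other systems):

        # torrc
        TransPort 9040
        DNSPort 5353

        # Traffic from the Tor daemon itself goes out directly
        iptables -t nat -A OUTPUT -m owner --uid-owner debian-tor -j RETURN
        # Everything else: DNS to Tor's DNSPort, TCP to Tor's TransPort
        iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353
        iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040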

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:
    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink the size limit (offline if need be).
    - Avoid out-of-kernel modules.
    - R/W or read-only COW snapshots. If they are block-level snapshots, the filesystem should be synced and quiesced during a snapshot.
    - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).
    Not required but nice to have:
    - Transparent compression, per-file or per-subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or user-defined (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
    - Space-efficient packing of small files.
    I tried ZFS on Linux, but the limitations were:
    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS do not scale with the number of devices in a raidz vdev.
    - Cannot add disks to raidz vdevs.
    - Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks.
    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
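    For comparison, a btrfs sketch of the add-a-spindle-and-redistribute workflow (device names are placeholders). Note that btrfs applies RAID profiles to the whole filesystem rather than per subvolume, so it only partly meets the first requirement, and transparent compression is a mount option:

        # Three-disk filesystem with mirrored data and metadata,
        # mounted with transparent compression
        mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd
        mount -o compress /dev/sdb /mnt/pool

        # Later: add a fourth spindle and rebalance existing block
        # groups across all devices
        btrfs device add /dev/sde /mnt/pool
        btrfs filesystem balance /mnt/pool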

  • apache2: ssl_error_rx_record_too_long when visiting port 80? help!

    - by John
    Hi, I have an Ubuntu 10 x64 server edition machine. I got a second IP and configured /etc/network/interfaces like so (actual IPs and gateways removed):
        auto lo
        iface lo inet loopback
        iface eth0 inet dhcp
        auto eth0
        auto eth0:0
        iface eth0 inet static
            address [ my first IP ]
            netmask 255.255.255.0
            gateway [ my first gateway ]
        iface eth0:0 inet static
            address [ my second IP ]
            netmask 255.255.255.0
            gateway [ my second gateway ]
    /etc/apache2/ports.conf:
        Listen 80
        NameVirtualHost [ my first IP ]:80
        NameVirtualHost [ my second IP ]:80
        # If you add NameVirtualHost *:443 here, you will also have to change
        # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
        # Server Name Indication for SSL named virtual hosts is currently not
        # supported by MSIE on Windows XP.
        Listen 443
        NameVirtualHost [ my first IP - some site is running SSL successfully using it ]:443
        Listen 443
    /etc/apache2/sites-enabled/mysite.conf:
        ServerName mysite.com
        Include /var/www/mysite.com/djangoproject/apache/django.conf
    Then when visiting http://[mysite].com:80 or http://[mysite].com, I get:
        An error occurred during a connection to [mysite].com.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)
    My guess is that the configuration file is not being picked up, and Apache is therefore looking for the default-ssl file, which is not in conf-enabled. If I were to configure that file properly, it seems I would successfully connect to whatever default directory is specified in the default-ssl file. But I want to connect to my website. Any ideas? Thanks in advance!
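    ssl_error_rx_record_too_long usually means a plain-HTTP conversation is happening on a socket the browser expects to be SSL, or an SSL vhost is answering without SSL actually enabled. A minimal sketch of the usual split, keeping the question's placeholders, where SSLEngine is on only in the 443 vhost (certificate paths are assumptions):

        <VirtualHost [ my first IP ]:80>
            ServerName mysite.com
            Include /var/www/mysite.com/djangoproject/apache/django.conf
        </VirtualHost>

        <VirtualHost [ my first IP ]:443>
            ServerName mysite.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/mysite.crt
            SSLCertificateKeyFile /etc/ssl/private/mysite.key
            Include /var/www/mysite.com/djangoproject/apache/django.conf
        </VirtualHost>

    The duplicated Listen 443 in ports.conf is also worth checking; it may be a leftover of stripped <IfModule> blocks, and Apache should listen on each port exactly once.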

  • "ImportError: No module named flask" - Trouble with nginx + uWSGI + Flask in a virtualenv setup

    - by vjk2005
    I got nginx + uWSGI running on localhost inside a virtualenv with a simple hello-world program, but I get this error when I replace the hello world with a simple Flask app:
        File "./wsgi_configuration_module.py", line 1, in <module>
          from flask import Flask
        ImportError: No module named flask
        unable to load app mountpoint
    Here's the Flask app (wsgi_configuration_module.py):
        from flask import Flask

        application = Flask(__name__)

        @application.route("/")
        def hello():
            return "hello world"

        if __name__ == "__main__":
            application.run()
    uWSGI config (app_conf.xml):
        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <no-site>true</no-site>
        </uwsgi>
    nginx config:
        server {
            listen 80;
            server_name localhost;
            access_log /srv/www/labs/logs/access.log;
            error_log /srv/www/labs/logs/error.log;
            location / {
                include uwsgi_params;
                uwsgi_pass 127.0.0.1:9001;
            }
            location /static {
                root /srv/www/labs/public_html/static/;
                index index.html index.htm;
            }
        }
    The virtualenv is stored in ~/virtual_env, with Python 2.7 + nginx + uWSGI + Flask installed in a virtualenv called basic. Things I've tried to solve this: set the --home (-H) option to my virtualenv folder ~/virtual_env while running uWSGI. Other info: I have the same setup working outside of a virtualenv. Things go wrong only when I try to replicate the setup inside of a virtualenv. Where have I gone wrong?
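    Two things stand out, sketched below: the XML config never points uWSGI at the virtualenv (the XML counterpart of the -H flag is a home element), and the env that has Flask appears to be ~/virtual_env/basic rather than ~/virtual_env. The no-site option may also need to go, since it stops Python from setting up site-packages, which is where the virtualenv's Flask lives. The home path below is an assumed expansion of ~:

        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <!-- point uWSGI at the virtualenv that actually has Flask -->
            <home>/home/youruser/virtual_env/basic</home>
        </uwsgi>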

  • Nginx $scheme doesn't always work while using SSL for one specific page

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but not for me. I was wondering if anyone has any advice on how to improve this configuration:
        ssl_certificate example.net.crt;
        ssl_certificate_key example.key;
        server {
            listen 80 default;
            listen 443 ssl;
            server_name www.example.net example.net;
            access_log /srv/www/example.net/logs/access.log;
            error_log /srv/www/example.net/logs/error.log;
            root /srv/www/example.net/public_html;
            index index.php index.html;
            location / {
                if ( $scheme = https ) {
                    rewrite ^ http://example.net$request_uri? permanent;
                }
                try_files $uri $uri/ /index.php?$uri&$args;
                index index.php index.html;
            }
            location ^~ /admin.php {
                if ( $scheme = http ) {
                    rewrite ^ https://example.net$request_uri? permanent;
                }
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }
            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
    It seems that the extra information in the location ^~ /admin.php block is unnecessary; does anyone know of an easy way to avoid duplicate code? Without it, nginx skips the PHP block and just returns the PHP files. Currently it applies HTTPS correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page. When returning to the non-HTTPS website in Firefox, it does not correctly return to HTTP but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of any ways I could reduce duplicate configuration options? Thanks in advance!
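    One way to avoid both the if blocks and the duplicated fastcgi stanza is to split the config into one server per scheme and express the two redirects with return, so each URL has exactly one handler. A sketch under the same names (static assets referenced by the admin page will bounce back to HTTP here, which may or may not be acceptable):

        server {
            listen 80 default;
            server_name www.example.net example.net;
            root /srv/www/example.net/public_html;
            index index.php index.html;
            location = /admin.php { return 301 https://example.net/admin.php; }
            location / { try_files $uri $uri/ /index.php?$uri&$args; }
            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }

        server {
            listen 443 ssl;
            server_name www.example.net example.net;
            ssl_certificate example.net.crt;
            ssl_certificate_key example.key;
            root /srv/www/example.net/public_html;
            location = /admin.php {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }
            location / { return 301 http://example.net$request_uri; }
        }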

  • Extension shortcuts in Chrome don't have text in them, Windows 7

    - by bboyreason
    I have a few extensions next to the URL bar in Chrome on Windows 7. When I press any of them, or even the bookmark star, the text box that pops up is blank. I am using the following extensions: Adblock Plus, Google Translate, Firebug Lite, Ghostery, Avast WebRep, Ultimate User Agent Switcher, Web Developer toolbar, Desprotetor de Links, DNT Plus, and WOT. I am set to block popups but not JavaScript in Chrome, and I have a blackout theme installed. Any ideas?
