Search Results

Search found 88103 results on 3525 pages for 'bam data control multiple'.


  • Make flash drive bootable with MULTIPLE Windows Installers

    - by alexander7567
    I have seen many ways to make a USB drive an installer for almost any single Windows OS. But how can I (using GRUB or something like it) make one drive bootable into the installers for multiple editions of Windows XP and 7? I have tried Googling it and researching, and I've even tried to do it, but I don't understand much about GRUB or Linux at all. Please keep in mind that I am not very good with Linux, so please use as many details as possible.
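    For orientation only, a rough sketch of the kind of GRUB2 menu this usually involves, assuming each installer has been extracted to its own partition of the USB drive (the partition numbers here are invented). Each entry just chainloads that partition's boot sector, so each partition still needs Windows boot code written to it first (e.g. with bootsect.exe /nt60 for the Win7 one), which is the fiddly part:

        menuentry "Windows 7 Setup" {
            insmod part_msdos
            insmod ntfs
            set root=(hd0,msdos2)    # partition holding the extracted Win7 installer
            chainloader +1           # hand off to that partition's boot sector
        }

        menuentry "Windows XP Setup" {
            insmod part_msdos
            insmod ntfs
            set root=(hd0,msdos3)    # partition holding the extracted XP installer
            chainloader +1
        }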

    Read the article

  • Multiple subnets on isc-dhcp-server using ddns with bind9

    - by legioxi
    On my network I have two subnets:

        10.100.1.0/24 - wired/wireless
        10.100.7.0/24 - VPN

    Both subnets are served by isc-dhcp-server running on a Debian VM. This same VM runs bind9 for my DNS. ISC-DHCP-SERVER is configured to use DDNS and update BIND9 with hosts/IPs. Everything runs great until a device drops off the wired/wireless network and pops onto the VPN. When connecting on the VPN, a DHCP lease is handed out on the new subnet but DDNS does not update BIND9. Since the device has A/TXT/PTR records, it appears ISC-DHCP-SERVER won't switch them to the new IP. The logs show the following.

    Connect to wireless:

        Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' A
        Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' TXT
        Nov 6 20:55:13 core-server dhcpd: DHCPACK on 10.100.1.160 to FF:FF:FF:FF:FF:FF (demo-iphone) via eth0
        Nov 6 20:55:13 core-server dhcpd: Added new forward map from demo-iphone.internal.mydomain.com to 10.100.1.160
        Nov 6 20:55:13 core-server dhcpd: Added reverse map from 160.49.21.172.in-addr.arpa. to demo-iphone.internal.mydomain.com

    Switch to VPN:

        Nov 6 20:56:34 core-server dhcpd: DHCPOFFER on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com: 'name not in use' prerequisite not satisfied (YXDOMAIN)
        Nov 6 20:56:34 core-server dhcpd: DHCPREQUEST for 10.100.7.101 (10.100.1.2) from BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server dhcpd: DHCPACK on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com/TXT: 'RRset exists (value dependent)' prerequisite not satisfied (NXRRSET)
        Nov 6 20:56:34 core-server dhcpd: Forward map from demo-iphone.internal.mydomain.com to 10.100.7.101 FAILED: Has an address record but no DHCID, not mine.

    One thing to note is that the MAC of the device when connecting via VPN is the MAC of my Cisco ASA5512X and not the actual device. The ASA is relaying the DHCP request from the VPN client to the VM running ISC-DHCP-SERVER. Is there a way to get DDNS working in this scenario?
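    The "Has an address record but no DHCID, not mine" line points at dhcpd's conflict check: the TXT record stores the original client's identity, and the relayed VPN request presents a different one, so the update is refused. A hedged sketch of the dhcpd.conf knob that relaxes this (the statement exists in ISC DHCP 4.x; whether disabling conflict detection globally is acceptable on your network is a judgment call):

        # dhcpd.conf - minimal DDNS sketch, zone details assumed
        ddns-update-style interim;
        ddns-domainname "internal.mydomain.com";

        # accept updates for names whose existing records were registered by a
        # different client identity (disables the DHCID/TXT prerequisite check)
        update-conflict-detection false;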

    Read the article

  • Apache httpd.conf handle multiple domains to run the same application

    - by John Stewart
    So what we are looking for is the ability to do the following: we have an application that can load certain settings based on the domain it is being accessed from. If you come from xyz.com we show one logo, and if you come from abc.com we show a different logo. The code is the same, running on the same server; it just detects the domain at run time. Now we want to get a dedicated server (any suggestions?) that will let us point all the domains we want at this server (we change the DNS for the domains to that of our server), so that when a user goes to any of those domains they run the same application. As far as I can understand, we will need to create a "VirtualHost" in Apache to handle this. Can we create a wildcard VirtualHost that catches all the domains? I am not an expert with Apache at all, so please forgive me if this turns out to be a silly question. Any detailed help would be great. Thanks
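    On the wildcard part: Apache does support a catch-all name-based vhost via ServerAlias. A minimal sketch, with placeholder names and paths, assuming the application itself inspects the Host header at run time:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName app.example.com
            ServerAlias *
            DocumentRoot /var/www/app
        </VirtualHost>

    Because this is the only (or last-matching) vhost, any domain whose DNS points at the server's IP lands in it, and the application still sees the original Host header.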

    Read the article

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue; let me explain my problem. I want to set up a git repository that three or four users will contribute to, so they need to download the code and be able to upload their changes to the server or update their branch with the latest modifications. So I set up a Linux machine, installed git, set up the repository, then added the users in order to enable access through ssh. Now my question is: what's next? The git documentation is a little bit confusing; e.g., when I try to clone the repository from a dummy user account I get:

        xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git
        Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/
        [email protected]'s password:
        fatal: '/pub.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    Is that a problem of privileges? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly; git is powerful, but I admit it is not so user friendly. Thanks Br
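    The error itself just means the path after the host doesn't point at a repository: with ssh:// URLs the path is absolute, so '/pub.git' literally means a repo in the filesystem root. A hedged sketch of a shared setup, assuming the repo lives under /srv/git and the contributors share a 'git' group (all names are placeholders):

        # on the server
        sudo git init --bare --shared=group /srv/git/pub.git
        sudo chgrp -R git /srv/git/pub.git     # contributors are members of group 'git'

        # on each workstation - note the full absolute path after the host
        git clone ssh://user@server.example.com/srv/git/pub.git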

    Read the article

  • Multiple IPv6 addresses being assigned to single interface

    - by Steven Copestake
    I'm configuring a DHCP scope for IPv6; however, all my clients are getting 2 or more IPv6 addresses assigned to the interface. I'm using fda8:6c3:ce53:a890::/64 as the prefix. The scope is on a machine running Windows Server 2008 R2; however, I'm looking to roll it out on Server 2012 as well. My clients are getting the following:

        IPv6 Address. . . . . . . . . . . : 1024::1492:9288:7357:7d30(Preferred)
        IPv6 Address. . . . . . . . . . . : fda8:6c3:ce53:a890::1(Preferred)
        IPv6 Address. . . . . . . . . . . : fda8:6c3:ce53:a890:1492:9288:7357:7d30(Preferred)
        Link-local IPv6 Address . . . . . : fe80::1492:9288:7357:7d30%10(Preferred)

    In this example there's no DHCP scope and the machine has a static IP of fda8:6c3:ce53:a890::1. With this in mind, where are the 1024 address and the fda8:6c3:ce53:a890:1492:9288:7357:7d30 address coming from?
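    The interface-ID halves of the extra addresses match the link-local suffix, which is the usual fingerprint of SLAAC: something on the link is sending Router Advertisements, and Windows autoconfigures one address per advertised prefix on top of the static/DHCPv6 one. A hedged way to test that theory on one client (the interface name is a placeholder):

        netsh interface ipv6 show interface "Local Area Connection"
        rem if Router Discovery shows as enabled, try disabling it and watch
        rem whether the extra addresses age out:
        netsh interface ipv6 set interface "Local Area Connection" routerdiscovery=disabled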

    Read the article

  • Add KO "data-bind" attribute on $(document).ready

    - by M.Babcock
    Preface: I've rarely ever been a JS developer, and this is my first attempt at doing something with Knockout.js. The question to follow likely illustrates both points.

    Background: I have a fairly complex MVC3 application that I'm trying to get to work with KO (v2.0.0.0). My MVC app is designed to generically control which fields appear in the view (and how they are added to the view). It makes use of partial views to decide what to draw in the view based on the user's permissions (if the user is in group A then show control A; if the user is in group B then show control B; or possibly, if the user is in group A, don't include the control at all). Also, my model is very flat, so I'm not sure the built-in ability to apply my ViewModel to a specific portion of the view will help. My solution to this problem is to provide an action in my controller that responds with a JSON object containing the jQuery selector and the content to assign to the "data-bind" attribute, and then to bind the ViewModel to the View in the $(document).ready event using the values provided.

    Failed proof-of-concept: My first attempt at proving that this works doesn't actually seem to work, and by "doesn't work" I mean it just doesn't bind the values at all (as can be seen in this jsfiddle). I've tried it with the applyBindings inside of the ready event and not, but it doesn't seem to make any bit of difference.

    Question: What am I doing wrong? Or is this just not something that can work with KO (though I've seen at least one example online doing the same thing and it supposedly works)? Like I said in the preface, I've only ever pretended to be a JS developer (though I've generally gotten it to work in the past), so I'm at a loss where to start trying to figure out what I'm doing wrong. Hopefully this isn't a real noob question.
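    For what it's worth, the approach itself can work in KO 2.x as long as every data-bind attribute exists in the DOM before ko.applyBindings runs. A hedged sketch of that ordering, with a made-up controller action URL and response shape, assuming the viewModel object is already constructed:

        $(document).ready(function () {
            // hypothetical action returning e.g.
            // [{ "Selector": "#userName", "Binding": "value: userName" }, ...]
            $.getJSON('/Binding/GetBindingsForUser', function (bindings) {
                $.each(bindings, function (i, b) {
                    $(b.Selector).attr('data-bind', b.Binding);
                });
                // bind only after every attribute is in place
                ko.applyBindings(viewModel);
            });
        });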

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using MongoDB. The stack is Linux/Passenger/Ruby/Rails/MongoDB. The database is write heavy and read light. The infrastructure is Amazon EC2. The application layer must be physically located in 3 or more different locations; I can't justify this requirement further than that it is a requirement. The database, however, needn't be located in more than one location if it can be written to quickly from the other locations. From reading Mongo's documentation, replication seems like more of a candidate than sharding, because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.
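    If replica sets fit, one hedged sketch (hostnames invented) is to pin the primary to a single region via member priorities, so elections never move writes across the WAN; secondaries replicate asynchronously, so the far-away copies catch up without blocking writes:

        // mongo shell - replica set with one preferred primary region
        rs.initiate({
          _id: "geoSet",
          members: [
            { _id: 0, host: "db-us-east.example.com:27017", priority: 2 },
            { _id: 1, host: "db-us-west.example.com:27017", priority: 1 },
            { _id: 2, host: "db-eu.example.com:27017", priority: 0 }  // never primary
          ]
        })

    The app servers in the other locations would then write to the primary over the WAN; whether that write latency is acceptable for a write-heavy load is the open question in this design.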

    Read the article

  • Sharepoint: Multiple Alternate Access Mapping Collections for Single Web Application

    - by Russ Giddings
    Hi all, we have a SharePoint MOSS 2007 installation which has two different external hostnames. When inspecting the setup, I've noticed that there are two Alternate Access Mapping collections mapped to the same web application. Each AAM collection contains one URL mapped to the default zone. I can't see how AAM collections are mapped to web apps, or even how to create a new AAM collection. I've always thought that there was just a one-to-one mapping between web apps and AAM collections. Does anyone have any idea how you would create such a situation? Cheers, Russell

    Read the article

  • What would multiple serial ports be used for?

    - by jasondavis
    I saw someone using something like the card in the link below in their system for some networking gear on their PC. I am very curious what a person would need 8 serial ports for. What kind of equipment uses this?
    http://www.newegg.com/Product/Product.aspx?Item=N82E16815124041&cm_re=serial_card--15-124-041--Product

    Read the article

  • where on disk is space allocated for new files inside LVM lv with ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup. Recently an additional, very small disk was added as a physical volume to that volume group, and I expanded both the logical volume and the ext4 file system therein onto that disk. This LV is used to store incremental backups using rsync and is only about 30% full; there have rarely been any files deleted from it, only incremental writes.

    Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate would have it, this WAS the "in the event of catastrophic failure on the primary server" backup, the event happened, the boss is not happy, so this kinda has to work...

    According to this (part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new PV with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I haven't tried it yet, because it involves repairing (writing to) the file system, which eliminates the possibility of trying other things if it fails.

    Now my questions are:

    - How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of the PVs, in the order they were added to the VG? Is it striped somehow in order to increase performance or balance load?
    - Since this defective disk was added only later to an existing LVM2 VG and LV containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair?
    - Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (Comparing addresses of used data blocks inside ext4 to the address ranges that were on the missing PV, something like that, preferably easy to automate.)

    I know bitwise-copying the entire LV into an image file before trying to repair the ext4 would probably be a good idea, but since this LV is very large and I just suffered major file system failure on several systems, it is probably a luxury I don't have... Any suggestions?
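    On the allocation question: a plain (non-striped) LV is laid out linearly across physical extents, and with the default allocation policy LVM fills the existing PVs before touching ones added later, so there is a fair chance the small late-added disk only ever held extents from the LV extension. A hedged sketch of how to check which extent ranges live where, without mounting anything (device and VG names invented; the exact lvs field names vary a little between LVM versions):

        # which physical extents of each PV back which LV segments
        pvdisplay --maps /dev/sdb1 /dev/sdc1

        # per-LV view: logical segment -> backing device and starting PE
        lvs -o lv_name,seg_start_pe,seg_size,devices vg0

    Cross-referencing those ranges against the block-usage summary from dumpe2fs on the (read-only) LV would then indicate whether any used ext4 blocks fell on the missing PV.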

    Read the article

  • How to control admission policy in vmware HA?

    - by John
    Simple question: I have 3 hosts running 4.1 Essentials Plus with VMware HA. I created several virtual machines that filled 90% of each server's memory capacity. I know that VMware has really sophisticated memory management within virtual machines, but I do not understand how vCenter can allow me even to power on virtual machines that exceed the critical memory level beyond which a host failover can still be handled. Is it because the virtual machines are not actually using the memory, so it is still considered free and more virtual machines can be powered on? But what would happen if all VMs were really using their RAM before a host failure - they could not be migrated to the other hosts after the failure. The default behaviour in XenServer is that it automatically calculates the maximum memory level that can be used within the cluster such that a host failure is still protected against. Does VMware do the same thing? The admission policy is enabled. VMware HA is enabled.

    Read the article

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over https, and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Win2000 server running IIS6. About half our services are located here, and the other half take you to a Win2003 server, also running IIS6. So, in order to achieve this, must each server have the proper certificate installed - ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made to our ISA firewall? The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com, but the domain name on IIS6_2 is services.domainname.com. I believe that this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123, as it's a good name and it's fast to get. Will I need to purchase 2 certificates (one for www.domainname.com that will be installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase 3, the extra one being used on our firewall server? Another side question is about SANs (subject alternative names). Is this basically adding sub-domains to your cert? So could I purchase one cert with 1 SAN covering both my www. and services.? Thanks a lot for your help! Please let me know if I can provide any further information.

    Read the article

  • jQuery Flow Control (if then from URL params)

    - by Ryan Max
    Strangely enough, I am more familiar with jQuery than I am with javascript. I need to be able to add a class to the body tag of a document depending on which specific forum page I'm on in a phpbb forum. Due to the nature of phpbb I can't actually do this flow control in php, so I am using jquery. The first part of my code is an extend that gets the URL parameters, so that with a URL like http://www.mysite.com/viewforum.php?f=3, var forum = $.getUrlVar('f'); will make forum == 3. This is my code:

        $(document).ready(function(){
            $.extend({
                getUrlVars: function(){
                    var vars = [], hash;
                    var hashes = window.location.href.slice(window.location.href.indexOf('?') + 1).split('&');
                    for(var i = 0; i < hashes.length; i++) {
                        hash = hashes[i].split('=');
                        vars.push(hash[0]);
                        vars[hash[0]] = hash[1];
                    }
                    return vars;
                },
                getUrlVar: function(name){
                    return $.getUrlVars()[name];
                }
            });
        });

        $(document).ready(function(){
            var forum = $.getUrlVar('f');
            if (forum == 3){
                $('body').toggleClass('black');
            }
        });

    Yet this isn't working. Any idea why not?
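    A leaner take on the same idea, offered only as a hedged alternative: reading window.location.search instead of the full href avoids accidentally matching '#' fragments, and addClass avoids the toggle flipping the class off on a second run:

        $(document).ready(function () {
            // assumes 'f' appears in the query string proper, e.g. viewforum.php?f=3
            var match = /[?&]f=(\d+)/.exec(window.location.search);
            if (match && match[1] === '3') {
                $('body').addClass('black');
            }
        });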

    Read the article

  • correct routing for multiple devices

    - by helmi
    I have a Debian Lenny machine with 3 interfaces enabled (eth0-2), and I have a problem as follows. eth1 is connected to a router, and this router has port forwarding for port 80; eth2 is connected directly to the internet. If I open a website hosted on my system via the router, it works fine. If I try to open the same site via the eth2 connection, it does not! tshark shows incoming traffic on eth2, but nothing goes out there. iptables accepts everything. My routing table:

        Destination     Gateway          Genmask          Flags  Metric  Ref  Use  Iface
        10.9.0.2        *                255.255.255.255  UH     0       0    0    tun0
        212.236.24.128  *                255.255.255.224  U      0       0    0    eth2
        192.168.1.0     *                255.255.255.0    U      0       0    0    eth1
        192.168.0.0     *                255.255.255.0    U      0       0    0    eth0
        10.9.0.0        10.9.0.2         255.255.255.0    UG     0       0    0    tun0
        default         192.168.1.1      0.0.0.0          UG     0       0    0    eth1
        default         212.236.024.129  0.0.0.0          UG     0       0    0    eth2
        default         192.168.0.1      0.0.0.0          UG     0       0    0    eth0
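    With three default routes, replies to traffic that arrived on eth2 typically leave via whichever default wins (here eth1) with the wrong source address, so the remote end discards them. The standard cure is source-based policy routing; a hedged sketch, assuming eth2's own address is 212.236.24.130 (it isn't shown in the question):

        # one-time: name an extra routing table
        echo "200 inet2" >> /etc/iproute2/rt_tables

        # packets sourced from the eth2 address use eth2's own default route
        ip route add default via 212.236.24.129 dev eth2 table inet2
        ip rule add from 212.236.24.130 table inet2
        ip route flush cache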

    Read the article

  • Oracle rownum in db2 - Java data archiving

    - by HonorGod
    I have a data archiving process in java that moves data between db2 and sybase. (FYI - this is not done through any import/export process, because there are several conditions on each table that are only available at run time, so this process was developed in java.) Right now I have a single DatabaseReader and DatabaseWriter defined for each source and destination combination, so that data is moved in multiple threads. I want to expand this further so that I can have multiple DatabaseReaders and multiple DatabaseWriters defined for each source and destination combination. So, for example, if the source data is about 100 rows and I define 10 readers and 10 writers, each reader will read 10 rows and give them to its writer. I hope this process will give me extreme performance, depending on the resources available on the server (CPU, memory, etc.). But I guess the problem is that these source tables do not have primary keys, and it is extremely difficult to grab rows in multiple sets. Oracle provides the rownum concept, and I guess life is much simpler there... but what about db2? How can I achieve this behavior with db2? Is there a way to say fetch the first 10 records, then fetch the next 10 records, and so on? Any suggestions / ideas?

    Db2 version - DB2 v8.1.0.144, Fix Pack 16, Linux
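    DB2's equivalent is the OLAP function ROW_NUMBER() OVER(), which should be available on v8.1. A hedged sketch (the table and ordering column are invented; note that without a deterministic ORDER BY the numbering, much like Oracle's ROWNUM, is not guaranteed stable across separate queries, which matters if each reader runs its own statement):

        -- reader 2 of 10 grabs its own window of rows
        SELECT *
        FROM (
            SELECT t.*, ROW_NUMBER() OVER (ORDER BY some_col) AS rn
            FROM source_table t
        ) AS numbered
        WHERE rn BETWEEN 11 AND 20;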

    Read the article

  • I need access control within the same network/VLAN

    - by Sadiq ali
    Hi, I have a single network/VLAN and I want to block some traffic and allow other traffic within my network. Is this possible using an L2 or L3 switch? If so, which switches support this feature, and what would be the commands to configure it? I have already tried this using access lists applied to an ethernet port, but an ACL applied on one port automatically works on traffic incoming on that port, whereas I meant it to work only on outgoing traffic as per my ACL. Do you have any suggestions, please?
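    For filtering between hosts inside the same VLAN, Cisco Catalyst switches offer VLAN access maps (VACLs), which apply to all traffic in the VLAN regardless of port or direction. A hedged sketch with invented addresses, blocking one host from reaching another within VLAN 10 (other vendors have similar but differently-named features):

        ip access-list extended INTRA-BLOCK
         permit ip host 192.168.1.10 host 192.168.1.20

        vlan access-map NOTALK 10
         match ip address INTRA-BLOCK
         action drop
        vlan access-map NOTALK 20
         action forward

        vlan filter NOTALK vlan-list 10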

    Read the article

  • multiple ip for a server not reachable

    - by andrewk
    FYI: I've read everything on Server Fault related to this question and have faced a different issue. Simply put, I've got one server (apache2) with a couple of sites on it. It currently has 1 IP. I'm trying to assign/add another IP to that server, so I can give each site a different IP for SSL purposes. I am not having any luck: the new IP is simply unreachable; I've pinged it. What I've got is below; what am I doing wrong?

        auto lo
        iface lo inet loopback

        auto eth0 eth0:0 eth0:1
        iface eth0 inet static
            address 70.116.5.244
            netmask 255.255.255.0
            gateway 70.116.5.1

        # THE NEW IP
        iface eth0:0 inet static
            address 26.175.217.102
            netmask 255.255.255.0

        # PRIVATE IP
        iface eth0:1 inet static
            address 192.168.158.88
            netmask 255.255.128.0

    NOTE: these IPs are tweaked but relative. I've read many questions here 90% similar to this, but most actually have the IP respond, not in this case. Thanks.

    netstat -r output:

        Kernel IP routing table
        Destination    Gateway          Genmask          Flags  MSS  Window  irtt  Iface
        default        gw-u6.linode.co  0.0.0.0          UG     0    0       0     eth0
        70.116.5.0     *                255.255.255.0    U      0    0       0     eth0
        26.175.217.0   *                255.255.255.0    U      0    0       0     eth0
        192.168.128.0  *                255.255.128.0    U      0    0       0     eth0

    Read the article

  • Vagrant (Virtualbox) host-only multiple node networking issue

    - by Lorin Hochstein
    I'm trying to use a multi-VM vagrant environment as a testbed for deploying OpenStack, and I've run into a networking problem trying to communicate from one VM to a VM-inside-of-a-VM. I have two Vagrant nodes, a cloud controller node and a compute node. I'm using host-only networking. My Vagrantfile looks like this:

        Vagrant::Config.run do |config|
          config.vm.box = "precise64"

          config.vm.define :controller do |controller_config|
            controller_config.vm.network :hostonly, "192.168.206.130" # eth1
            controller_config.vm.network :hostonly, "192.168.100.130" # eth2
            controller_config.vm.host_name = "controller"
          end

          config.vm.define :compute1 do |compute1_config|
            compute1_config.vm.network :hostonly, "192.168.206.131" # eth1
            compute1_config.vm.network :hostonly, "192.168.100.131" # eth2
            compute1_config.vm.host_name = "compute1"
            compute1_config.vm.customize ["modifyvm", :id, "--memory", 1024]
          end
        end

    When I try to start up a (QEMU-based) VM, it boots successfully on compute1, and its virtual NIC (vnet0) is connected via a bridge, br100:

        root@compute1:~# brctl show 100
        bridge name     bridge id               STP enabled     interfaces
        br100           8000.08002798c6ef       no              eth2
                                                                vnet0

    When the QEMU VM makes a request to the DHCP server (dnsmasq) running on controller, I can see the request reaches the controller because of the output in the syslog on the controller:

        Aug  6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPDISCOVER(br100) fa:16:3e:07:98:11
        Aug  6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPOFFER(br100) 192.168.100.2 fa:16:3e:07:98:11

    However, the DHCPOFFER never makes it back to the VM running on compute1. If I watch the requests using tcpdump on the vboxnet3 interface on my host machine that runs Vagrant (Mac OS X), I can see both the requests and the replies:

        $ sudo tcpdump -i vboxnet3 -n port 67 or port 68
        tcpdump: WARNING: vboxnet3: That device doesn't support promiscuous mode (BIOCPROMISC: Operation not supported on socket)
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vboxnet3, link-type EN10MB (Ethernet), capture size 65535 bytes
        22:51:20.694040 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:20.694057 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:20.696047 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311
        22:51:23.700845 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:23.700876 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:23.701591 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311
        22:51:26.705978 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:26.705995 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        22:51:26.706527 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311

    But if I tcpdump on eth2 on compute1, I only see the requests, not the replies:

        root@compute1:~# tcpdump -i eth2 -n port 67 or port 68
        tcpdump: WARNING: eth2: no IPv4 address assigned
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
        02:51:20.240672 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        02:51:23.249758 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
        02:51:26.258281 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280

    At this point, I'm stuck. I'm not sure why the DHCP replies aren't making it to the compute node. Perhaps it has something to do with the configuration of the VirtualBox virtual switch/router? Note that the eth2 interfaces on both nodes have been set to promiscuous mode.
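    One detail worth checking: setting promiscuous mode inside the guests isn't enough on its own; the VirtualBox side of each host-only adapter must also allow it, or frames addressed to the inner VM's MAC get dropped at the vboxnet switch before the guest ever sees them. A hedged sketch of the Vagrantfile change, assuming eth2 is the third adapter (NIC1 being Vagrant's default NAT interface):

        compute1_config.vm.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
        # equivalent VBoxManage call, for reference:
        #   VBoxManage modifyvm compute1 --nicpromisc3 allow-all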

    Read the article

  • OpenVZ multiple networks on CTs

    - by picca
    I have a hardware node (HN) which has 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration - venet. The CTs are fine accessing eth0 (the public interface), but I can't get the CTs to get access to eth1 (the private network). I tried:

        # on HN
        vzctl set 101 --ipadd 192.168.1.101 --save
        vzctl enter 101

        # inside CT
        ping 192.168.1.2     # no response here
        ifconfig             # returns lo (127.0.0.1), venet0 (127.0.0.1),
                             # venet0:0 (95.168.xxx.xxx), venet0:1 (192.168.1.101)

    I believe that the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump). So the problem might be in the routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from the CTs. Nothing complicated.
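    This matches how venet works: it is a purely routed device, so CT traffic leaves the HN according to the HN's own routing table (typically via eth0's default route), and hosts on the eth1 LAN have no idea where 192.168.1.101 lives. A hedged sketch of the bridged alternative using veth, which puts the CT directly on the private segment (the host-side interface name follows OpenVZ's veth<CTID>.<n> convention but should be verified on your install):

        # on HN: give CT 101 a veth interface in addition to venet
        vzctl set 101 --netif_add eth1 --save

        # bridge the HN's private NIC together with the CT's host-side veth device
        brctl addbr br1
        brctl addif br1 eth1
        brctl addif br1 veth101.0

        # inside the CT, configure its eth1 with a 192.168.1.x address as usual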

    Read the article

  • How to restore a dd overwritten disk partition?

    - by DairyKnight
    First of all, I admit I'm stupid and I didn't keep a proper backup of my data, but you know, crap happens... So, I've used dd to overwrite the first 2GB of my 750GB NTFS partition with a FAT32 partition. I've run PhotoRec and EasyRecovery, but all I can restore is the 2GB FAT32 partition and the files on it. Is there a way to "roll back" to the NTFS partition and recover at least some part of the 750GB of data? Thanks.
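    Before anything invasive, it's usually worth imaging the disk and letting a scanner look for the NTFS structures that survive past the first 2GB (NTFS keeps a backup boot sector at the end of the partition, and much of the MFT may lie beyond the overwritten region). A hedged sketch, assuming the damaged disk is /dev/sdb and a spare volume is mounted at /mnt/spare:

        # image the whole disk first; all recovery attempts then run on the copy
        dd if=/dev/sdb of=/mnt/spare/disk.img bs=4M conv=noerror,sync

        # scan the image for lost partitions / the NTFS backup boot sector
        testdisk /mnt/spare/disk.img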

    Read the article

  • pound: multiple domains

    - by niklassaers
    Hi guys, I've been using pound to run mydomain.dk. Now I've bought some other domains and SSL certificates for mydomain.no, mydomain.se and mydomain.eu. My old config looked roughly like this:

        ListenHTTPS
            Address 81.19.246.120
            Port    443
            Cert    "/usr/local/etc/pound.keys/mydomain.dk.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port    8080
                End
            End
        End

    At places like here I've seen that I can use HeadRequire in the Service part, but I want the Host header to go together with the Cert, ideally something like:

        ListenHTTPS
            Address 81.19.246.120
            Port    443
            HostAndCert "mydomain.dk" "/usr/local/etc/pound.keys/mydomain.dk.pem"
            HostAndCert "mydomain.se" "/usr/local/etc/pound.keys/mydomain.se.pem"
            HostAndCert "mydomain.no" "/usr/local/etc/pound.keys/mydomain.no.pem"
            HostAndCert "mydomain.eu" "/usr/local/etc/pound.keys/mydomain.eu.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port    8080
                End
            End
        End

    Any suggestions or clues as to how I can accomplish this? Cheers, Nik
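    There is no HostAndCert directive in Pound as far as I know, but newer Pound releases (2.6 and later, if memory serves) select the certificate by SNI when a listener carries several Cert lines: the client-supplied server name is matched against each certificate's CN/subjectAltName in order. A hedged sketch of that shape:

        ListenHTTPS
            Address 81.19.246.120
            Port    443
            # first SNI match wins; the first entry also serves as the
            # default for clients that don't send SNI at all
            Cert "/usr/local/etc/pound.keys/mydomain.dk.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.se.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.no.pem"
            Cert "/usr/local/etc/pound.keys/mydomain.eu.pem"
            Service
                BackEnd
                    Address 10.0.10.10
                    Port    8080
                End
            End
        End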

    Read the article

  • Google Chrome not loading web pages correctly unless multiple refreshes

    - by Brandon Wilson
    Webpages in Google Chrome do not load correctly from time to time. I can't reproduce it; it just happens. Sometimes it happens when I load the browser; other times it happens while I am just browsing. Just now I went to five different websites, and 3 out of 5 of them did not load correctly. I have attached a photo of how Super User loaded the first time I visited it. Facebook is bad like this: sometimes Facebook will load correctly, but some of its back-end scripting may not load, so the page may not refresh automatically. Not sure what is going on. I have tried other browsers (Firefox and Internet Explorer) and they seem to be working correctly; Chrome seems to be acting up only on this computer. All my computers are running Windows 8, and I have removed Chrome completely from this computer and re-installed it. I even disabled all extensions and cleared all the caches. I even tried running Chrome without being logged in. Not sure what else to do at this point. An example of superuser.com not loading correctly: when I refresh, the problem goes away until it happens again. Sometimes it takes two or three refreshes for a page to load correctly.

    Read the article
