Search Results

Search found 17771 results on 711 pages for 'dhcp option'.

Page 229 of 711

  • Internet works on the router but not on the local network

    - by TheMouse
    I have a PC and a laptop (both running Windows 7) which should connect to the Internet through a router. The router is a Linksys WRT120 and my ISP uses PPPoE. I connected the Internet cable to the router, cloned the MAC address of my PC, and entered the username and password for my Internet connection. Within seconds the router acquired an IP address from my ISP. Using the Linksys administration panel and the ping and tracert commands built into it, I can reach the world outside the network. The problem appears when I connect the PC or the laptop to the local network. The local connection itself is fine: the router's DHCP server gives them appropriate addresses, and they can reach the router and its control panel. However, they cannot reach anything on the Internet, whether by name (google.com) or by IP address. I have tried several times and reset the router, but there is still no Internet.

    Read the article

  • Set up Windows 2012 AD in Hyper-V for a test environment

    - by hub
    I'm trying to set up a Windows 2012 R2 test environment on my work computer (a laptop). I have AD, DHCP and DNS on server A, and a client connecting to the domain, and that works. The client can ping the AD server and gets a valid IP address. If I ping google.com from the client I get the IP address back but no responses (request timed out). If I ping google.com from server A it works as it should. Server A has a connection to the Internet through an "external network switch" in Hyper-V, which gets its Internet from a router, and the client is connected to an "internal network switch". Could the problem be that server A is behind a router? Can I make this solution work regardless of the network my laptop is connected to? At home I have one IP address; at work it's a totally different range. What I would like is to use my laptop's Internet connection, whether Wi-Fi or wired, as the incoming Internet. Is this possible?

    Read the article

  • Parallel tasking concurrency with dependencies in Python, like GNU Make

    - by Brian Bruggeman
    I'm looking for a method, or possibly a philosophical approach, for doing something like GNU Make within Python. Currently we use makefiles to execute processing because makefiles are extremely good at parallel runs, changed with a single option: -j x. In addition, GNU Make already has the dependency stacks built into it, so adding a second processor, or the ability to process more threads, just means updating that single option. I want that same power and flexibility in Python, but I don't see it. As an example:

    all: dependency_a dependency_b dependency_c
    dependency_a: dependency_d
        stuff
    dependency_b: dependency_d
        stuff
    dependency_c: dependency_e
        stuff
    dependency_d: dependency_f
        stuff
    dependency_e:
        stuff
    dependency_f:
        stuff

    With a standard single-threaded run (-j 1), the order of operations might be: dependency_f -> dependency_d -> dependency_a -> dependency_b -> dependency_e -> dependency_c. For two threads (-j 2), we might see: thread 1: dependency_f -> dependency_d -> dependency_a -> dependency_b; thread 2: dependency_e -> dependency_c. Does anyone have any suggestions on either a package already built or an approach? I'm totally open, provided it's a Pythonic solution/approach. Please and thanks in advance!

    Read the article

  • Difference between BackgroundWorker.ReportProgress() and Control.Invoke()

    - by ohadsc
    What is the difference between options 1 and 2 in the following?

    private void BGW_DoWork(object sender, DoWorkEventArgs e)
    {
        for (int i = 1; i <= 100; i++)
        {
            string txt = i.ToString();
            if (Test_Check.Checked)
                //OPTION 1
                Test_BackgroundWorker.ReportProgress(i, txt);
            else
                //OPTION 2
                this.Invoke((Action<int, string>)UpdateGUI, new object[] { i, txt });
        }
    }

    private void BGW_ProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        UpdateGUI(e.ProgressPercentage, (string)e.UserState);
    }

    private void UpdateGUI(int percent, string txt)
    {
        Test_ProgressBar.Value = percent;
        Test_RichTextBox.AppendText(txt + Environment.NewLine);
    }

    Looking at Reflector, Control.Invoke() appears to use: this.FindMarshalingControl().MarshaledInvoke(this, method, args, 1); whereas BackgroundWorker.ReportProgress() appears to use: this.asyncOperation.Post(this.progressReporter, args); (I'm just guessing these are the relevant function calls.) If I understand correctly, the BackgroundWorker posts its progress report request to the WinForms window, whereas Control.Invoke uses a CLR mechanism to invoke on the right thread. Am I close? And if so, what are the repercussions of using either? Thanks
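    (For context, a minimal, self-contained WinForms sketch of the two marshaling routes; the form, control and handler names here are made up for illustration. ReportProgress posts asynchronously through the SynchronizationContext captured when RunWorkerAsync is called, so the worker keeps running without waiting for the UI, whereas Control.Invoke marshals synchronously and blocks the worker until the UI thread has executed the delegate.)

    using System.ComponentModel;
    using System.Threading;
    using System.Windows.Forms;

    public class DemoForm : Form
    {
        private readonly BackgroundWorker worker =
            new BackgroundWorker { WorkerReportsProgress = true };
        private readonly ProgressBar bar =
            new ProgressBar { Dock = DockStyle.Top, Maximum = 100 };

        public DemoForm()
        {
            Controls.Add(bar);

            worker.DoWork += (s, e) =>
            {
                for (int i = 1; i <= 100; i++)
                {
                    Thread.Sleep(10); // simulate work

                    // Option 1: asynchronous post via the SynchronizationContext
                    // captured by RunWorkerAsync; the worker does not wait.
                    worker.ReportProgress(i);

                    // Option 2 (synchronous alternative): the worker blocks here
                    // until the UI thread has executed UpdateGui.
                    // this.Invoke((System.Action<int>)UpdateGui, i);
                }
            };
            worker.ProgressChanged += (s, e) => UpdateGui(e.ProgressPercentage);
            Load += (s, e) => worker.RunWorkerAsync();
        }

        private void UpdateGui(int percent) => bar.Value = percent;

        [System.STAThread]
        public static void Main() => Application.Run(new DemoForm());
    }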

    Read the article

  • Hyper-V with multiple physical networks

    - by Yaman
    I have Hyper-V on my laptop, which has a wireless and an Ethernet card; sometimes I connect using the wireless card and sometimes using the Ethernet cable. I am trying to configure Hyper-V so that the virtual machine always has Internet access. I tried creating two virtual switches, one external on the wireless network card and the other external on the Ethernet card. What happens is that the wireless switch creates a bridge object under Network and Sharing Center\Network Connections in Windows 8, while the Ethernet one does not. Unfortunately, they do not work together as external: I have to set whichever adapter is currently connected to external and reconfigure the other, and I also have to go into the properties of the bridge and the virtual Ethernet adapter to uncheck and re-check components like Client for Microsoft Networks, Deterministic Network Enhancer, VMware Bridge Protocol, QoS Packet Scheduler, and File and Printer Sharing for Microsoft Networks in order to make things work. Sometimes I keep the wireless network switch external, go to another location (another wireless network), and it disconnects, so I have to reconfigure the switches. Is there a way to do the configuration once and have it keep working wherever I connect, whether wireless or Ethernet, on any network with DHCP?

    Read the article

  • One DNS server in different subnets

    - by hofmeister
    I have installed a small Linux server; the server is in a different subnet than the Internet hosts. I added a route to my NAT router to create a connection between both subnets, and in each subnet I run a separate DHCP server. Subnet A: 192.168.0.0/26, Subnet B: 192.168.1.0/26. Router: 192.168.0.1, server in A: 192.168.0.62, server in B: 192.168.1.62.

    internet ____ nat router ____ (Sub A) ____ internet hosts
                      |
                      |____ (Sub B) ____ other hosts

    I can ping every host, and the hosts connected to subnet B have an Internet connection. Sadly, though, I have a problem with the DNS server. I use the DNS server of my NAT router and set the DNS server for subnet B to the IP 192.168.0.1, but every DNS entry is a variation of my Linux server's hostname. For example, if the hostname of the server is test:

    Test        192.168.0.62  // server, subnet A
    Test-2      192.168.1.62  // server, subnet B
    Test-2-2    192.168.1.1   // host a
    Test-2-2-2  192.168.1.2   // host b

    Any idea what went wrong? Internet DNS resolution works fine.

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file and then loop until all of it successfully goes through send() (handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on. Although I can use any chunk size when I test the transfer on my local machine, once I try it over a network it seems like 8191 is the largest number I can use. When I try any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me whether there is a number that works better than the others (better than 8000)? I have a feeling it has something to do with the fact that the default send buffer Winsock allocates to the socket is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option or the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?

    Read the article

  • Instructions to setup primary and only domain controller

    - by Robert Koritnik
    Where can I get the best step-by-step instructions (with some simple explanations) for setting up a domain controller on Windows Server 2008 R2 Server Core? I don't know what I need. Do I need DNS as well as AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it. This is my hardware configuration: a Linksys Internet router with DHCP; my dev machine runs Windows 7; my DC machine is a VM inside my dev machine; my dev machine has a hardware network adapter to the Linksys and a virtual network adapter to the DC; the DC machine has two network adapters, one to the Linksys (so it is Internet-connected and can be updated etc.) and one to the host (my dev Windows 7 machine). Edit: My development machine should access the domain controller and log on using domain credentials. The development machine would access the Internet directly via the Linksys router. My domain controller machine would only serve authentication and (if I'm able to configure it right) should also have Active Directory Federation Services in a workable condition. I hope this is a bit clearer now. At least a small bit.

    Read the article

  • Vim - show diff on commit in Mercurial

    - by JackLeo
    In my .hgrc I can provide an editor, or a command to launch an editor with options, on commit. I want to write a method or alias that launches $ hg ci and not only opens the commit message in Vim but also splits the window and prints the output of $ hg diff there. I know that I can pass commands to Vim with the +{command} option, so launching $ vim "+vsplit" does the split, but any other options go to the first opened window. So I assume I need a specific function, yet I have no experience writing my own Vim scripts. The script should: open a new vertical split with an empty buffer (possibly with vnew); in that empty buffer run :.!hg diff; and set the buffer's file type to diff with :set ft=diff. I've written such a function:

    function! HgCiDiff()
        vnew
        :.!hg diff
        set ft=diff
    endfunction

    and in .hgrc I've added the option: editor = vim "+HgCiDiff()". It kind of works, but I would like the split window to be on the right side (now it opens on the left) and the Mercurial message to be the focused window. Also, :wq could be set as a temporary shortcut for :wq<CR>:q! (on the assumption that the Mercurial message is focused). Any suggestions to make this a bit more useful and less chunky? UPDATE: I found the Vim split guide, so replacing vnew with rightbelow vnew opens the diff on the right side.

    Read the article

  • very slow internet with Linksys WRT54GL only in wireless mode (wired is OK)

    - by gojira
    I bought a new Cisco Linksys WRT54GL router to connect my laptop (running Windows 7) to the Internet, and installed the Tomato 1.28 firmware on it. When I connect the laptop to the router via an Ethernet cable, everything is fine and I get extremely fast up- and download speeds. When I connect wirelessly, however, websites load extremely slowly: it takes dozens of seconds to load a page. <-- This is my question: how can I fix the wireless speed issue? Gmail, for example, is unusable this way. I tried speedtest.net, but it always fails in the upload part of the test, so I can't even measure the bandwidth (could the fact that it fails on upload, not download, be an indication of what the problem is?). I have isolated the problem a bit and am convinced it has to do either with the router itself, the router settings, or the settings of the wireless connection in Windows 7, because previously I was using another router, by Buffalo, and had no problems whatsoever. I have tried to reproduce the Buffalo router's settings as closely as possible on the Linksys router (same channel, same encryption etc.). The download speed problem only occurs with the Linksys router, and only in wireless mode! When I swap the Linksys router for the Buffalo router I have here for testing, the wireless speed is back to normal. Also, I had exactly the same problem before I installed the Tomato firmware, so it has nothing to do with the firmware itself. Notes and things I already tried: changing the channel does not seem to affect anything, and I am on the same channel (10) that I used with the Buffalo router; QoS is off; ping to the router itself is OK, ~1 ms. Some current settings of the Linksys router: WAN/Internet type: DHCP; wireless mode: Access Point; B/G mode: Mixed; broadcast: checked; channel: 10 - 2.457 GHz; security: WPA2 Personal; encryption: AES.

    Read the article

  • Loading default value in a dropdown and calling onchange event

    - by J. Davidson
    Hi, I have the following dropdown:

    <div class="fcolumn">
        <label class="text" for="o_Id">Months:</label>
        <select class="textMonths" id="o_Id" name="periodName">
            <option value="000">Select Period--</option>
        </select>
    </div>

    In the following jQuery, fnLoadP() first loads the drop-down list. Then, as a default, I load one of the values in the drop-down, which is '10'. It does load as the default value, but it should also be executing $("#o_Id").change(), which it doesn't.

    $(document).ready(function () {
        var sProfileUserId = null;
        $("#o_Id").change(function () {
            //----
        });
        fnLoadP();
        $("select[name='pName']").val('10');
    });

    Basically my goal is: after the dropdown values are loaded, to set '10' as the default value and trigger the onchange event in the DOM. Please let me know how to fix it.

    Read the article

  • Problem with linking in gcc

    - by chitra
    I am compiling a program in which a header file is defined in multiple places. The contents of each header file are different: though the variable names are the same, the internal members of the structures differ. At link time, the linker picks up a library file that belongs to a different header, not the one used during compilation, and because of this I get a link error. Since there are so many libraries with the same name, I don't know which library is being picked up. I have a lot of OEM and other customized libraries that are part of this build. I looked at the gcc options for selecting different library files to include, but nowhere do I see an option that reports which libraries the linker actually picks up. If the linker can find more than one library with the same name, I do not understand which one it chooses. I don't want to specify any path; rather, I want to understand how the linker resolves the multiple libraries it is able to locate. I tried the -v option, but that doesn't list the path from which gcc picks up the library. I am using gcc on Linux. Any help in this regard is highly appreciated. Regards, Chitra

    Read the article

  • XP box with 3 NICs on Server 2003 Domain

    - by Hannibal
    I have assigned all three NICs IP addresses that are outside the DHCP pool. Two NICs are connected to one switch and the third to my second switch. I want to assign one of the two NICs on the first switch to "normal" network activity (e.g. Internet access, RDP, etc.). The other two NICs I want to reserve for mirroring ports on their respective switches. While this machine is connected to the domain I can access the Internet and Remote Desktop, but I have no idea which NIC I am using until I start mirroring a port, at which point, if I happen to be connected through one of the NICs I have dedicated to mirroring, I lose my Remote Desktop session. I am aware that I would have more control over the NICs using Linux. I want to explore Windows solutions before I go that route, because installing another OS would be inconvenient (but not impossible). I would likely use a version of BackTrack (3 or 4, not sure). I would also have to learn how to access the machine remotely, but I've done it before so this would be a minor obstacle. Thank you for your assistance.

    Read the article

  • Does anyone know how to "tcpdump" traffic decrypted by Mallory MITM?

    - by chriv
    I'm looking for some help capturing network traffic that I can analyze in Wireshark (or other tools). The tool I'm using is mallory. If anyone is familiar with mallory, I could use some help. I've got it configured and running correctly, but I don't know how to get the output that I want. The setup is on my private network. I have a VM (running Ubuntu 12.04, precise) with two NICs: eth0 is on my "real" network, and eth1 is only on my "fake" network and runs dnsmasq (for DNS and DHCP for other devices on the "fake" network). Effectively eth0 is the "WAN" on my VM and eth1 is the "LAN". I've set up mallory and iptables to intercept, decrypt, encrypt and rewrite all traffic coming in on destination port 443 on eth1. On the device I want intercepted, I have imported the ca.cer that mallory generated as a trusted root certificate. I need to analyze some strange behavior in the HTTPS stream between the client and server, which is why mallory is set up in between for this MITM. I would like to take the decrypted HTTPS traffic and dump it to either a log file or a socket in a format compatible with tcpdump/Wireshark, so I can collect it later and analyze it. Running tcpdump on eth1 is too soon (the traffic is still encrypted), and running tcpdump on eth0 is too late (it has been re-encrypted). Is there a way to make mallory "tcpdump" the decrypted traffic (in both directions)?

    Read the article

  • Possible Solution for Setting up a Linux VPN Server to Encrypt WLAN Traffic of Macs and iPhones on

    - by GorillaPatch
    I would like to set up a VPN server on Debian Linux to encrypt wireless traffic coming from my Mac or iOS device, using a certificate-based solution; setting up a PKI infrastructure and managing certificates is OK for me. 1. Which server to pick? Looking around the Internet and here on Stack Overflow, I found the following possible solutions: strongSwan IPsec and racoon. Which solution is feasible for a Linode running Debian Squeeze? 2. How to configure the network? If I understood correctly, a VPN has a virtual network interface as its endpoint on the server side. Naively, I would think I need a DHCP server running on the server to assign a dynamic private IP (for example from the class C network 192.168.xxx.xxx) to connecting clients. Next, I think I would need to set up masquerading to NAT the incoming VPN traffic out through the real interface that is directly connected to the Internet. Is this the right way to go? Do you have any configuration examples? I often see VPN configurations used to connect to your home network, but that is not what I am looking for: I have a server on the Internet and want to use it as a proxy to encrypt traffic in insecure network environments like public WLANs.

    Read the article

  • CORBA on MacOS X (Cocoa)

    - by user8472
    I am currently looking into different ways to support distributed model objects (i.e., a computational model that runs on several different computers) in a project that initially focuses on Mac OS X (using Cocoa). As far as I know, there is the possibility of using the class cluster around NSProxy, but there also seem to be CORBA implementations with Objective-C support. At a later time there may be a need to also support Windows machines. In that case I would need to use something like GNUstep on the Windows side (which may be an option, if it works well), come up with a combination of both technologies, or write something manually (which is, of course, the least desirable option). My questions are: 1. If you have experience with both technologies (Cocoa's native infrastructure vs. CORBA), can you point out some key features/issues of either approach? 2. Is it possible to use GNUstep with Cocoa in the way explained above? 3. Is it possible (and reasonably feasible, i.e. simpler than writing a network layer manually) to communicate among all Mac OS clients using Cocoa's technology and with Windows clients through CORBA?

    Read the article

  • ASP.NET MVC - binding of a parameter to a model value

    - by Pino
    It seems like model binding is causing me issues. Essentially I have a model called ProductOption and, for the purpose of this question, it has two fields: ID (int, PK) and ProductID (int, FK). I have a standard route set up:

    context.MapRoute(
        "Product_default",
        "Product/{controller}/{action}/{id}",
        new { controller = "Product", action = "Index", id = UrlParameter.Optional }
    );

    If the user wants to add an option, the URL is /Product/Options/Add/1; in that URL, 1 is the ProductID. I have the following code to return a blank model to the view:

    [HttpGet]
    public ActionResult Add(int id)
    {
        return View("Manage", new ProductOptionModel() { ProductID = id });
    }

    In my view I keep a hidden field, <%= Html.HiddenFor(x=>x.ID) %>, which is used to determine (on submit) whether we are editing or adding an option. However, the model binder in .NET seems to replace .ID (which was 0 when leaving the GET action above) with 1 (the value of the id parameter in the URL). How can I stop or work around this? ViewModel:

    public class ProductExtraModel
    {
        //Database
        public int ID { get; set; }
        public string Name { get; set; }
        public int ProductID { get; set; }
        public ProductModel Product { get; set; }
    }
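    (A hedged sketch of one common workaround, reusing the question's own action and model names: binding the action's id parameter leaves an "id" entry in ModelState, and the HTML helpers read ModelState before the model, which is why HiddenFor(x => x.ID) renders the route value. Removing that entry lets the model's ID of 0 win; renaming either the route parameter or the ID property so the names no longer collide is another way out.)

    [HttpGet]
    public ActionResult Add(int id)
    {
        // Drop the value captured while binding the "id" action parameter;
        // otherwise Html.HiddenFor(x => x.ID) finds it in ModelState
        // (the lookup is case-insensitive) and renders it instead of 0.
        ModelState.Remove("id");

        return View("Manage", new ProductOptionModel { ProductID = id });
    }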

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). So, for example, the attributes (in a very cut-down version) might look like this:

    A[aPK, SomeFields]
    1:M B[bPK, aFK, SomeFields]
    1:M C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

    A[aPK, SomeFields]:
    1, Foo
    2, Bar
    B[bPK, aFK, SomeFields]:
    1, 1, FooData1
    2, 1, FooData2
    1, 2, BarData1
    2, 2, BarData2
    C[cPK, bFK, aFK, SomeFields]:
    1, 1, 1, FooData1More
    2, 1, 1, FooData1More
    1, 2, 1, FooData2More
    2, 2, 1, FooData2More
    1, 1, 2, BarData1More
    2, 1, 2, BarData1More
    1, 2, 2, BarData2More
    2, 2, 2, BarData2More

    I've got this running in an MS SQL DBMS and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment identity specification option, as that has no idea that it is part of a composite key. I also don't want to use an aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
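    (One commonly suggested compromise, shown as a hedged sketch with the question's B table and placeholder column/connection names: keep MAX()+1, but compute it and insert in a single statement under UPDLOCK/HOLDLOCK range locks, so concurrent inserts for the same parent serialize instead of picking the same value. A trigger-based option would typically wrap the same kind of statement.)

    using System.Data;
    using System.Data.SqlClient;

    public static class CompositeKeyInsert
    {
        // Inserts a row into B with bPK = MAX(bPK) + 1 for the given parent (aFK).
        // The lock hints make the read-then-insert atomic per parent key.
        public static int InsertB(string connectionString, int aFK, string someField)
        {
            const string sql = @"
                INSERT INTO B (bPK, aFK, SomeFields)
                OUTPUT inserted.bPK
                SELECT ISNULL(MAX(bPK), 0) + 1, @aFK, @someField
                FROM B WITH (UPDLOCK, HOLDLOCK)
                WHERE aFK = @aFK;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.Add("@aFK", SqlDbType.Int).Value = aFK;
                cmd.Parameters.Add("@someField", SqlDbType.NVarChar, 100).Value = someField;
                conn.Open();
                return (int)cmd.ExecuteScalar(); // the newly assigned bPK
            }
        }
    }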

    Read the article

  • jQuery function

    - by kevin
    Hello, I'd like to get the current element firing this event, inside the event handler. Ideally I need its id, selected value and class. Thanks in advance.

    $(document).ready(function () {
        $("#positions select").live("change", function () {
            var id = $("#category_id").val();
            //var id = this.val();
            alert(id);
            $.getJSON("/Category/GetSubCategories/" + id, function (data) {
                if (data.length > 0) {
                    alert(data.length);
                    var position = document.getElementById('positions');
                    var tr = position.insertRow(7);
                    var td1 = tr.insertCell(-1);
                    var td = tr.insertCell(-1);
                    var sel = document.createElement("select");
                    sel.name = 'sel';
                    sel.id = 'sel';
                    sel.setAttribute('class', 'category');
                    // $("positions select").live("change", function () {
                    //});
                    td.appendChild(sel);
                    $.each(data, function (GetSubCatergories, category) {
                        $('#sel').append($("<option></option>").
                            attr("value", category.category_id).
                            text(category.name));
                    });
                }
            });
        });
    });

    Read the article

  • DDD and MVC: should models hold the ID of a separate entity or the entity itself?

    - by Dr. Zim
    If you have an Order that references a Customer, does the model include the ID of the customer or a copy of the customer object, like a value object (thinking DDD)? I would like to do this:

    public class Order {
        public int ID {get;set;}
        public Customer customer {get;set;}
        ...
    }

    Right now I do this:

    public class Order {
        public int ID {get;set;}
        public int customerID {get;set;}
        ...
    }

    It would be more convenient to include the complete customer object, rather than an ID, in the view model passed to the form; otherwise I need to figure out how to get the customer information to the view when the order only references it by ID. This also implies that the repository understands how to deal with the customer object it finds within the order object when they call save (if we select the first option). If we select the second option, we will need to know where in the view model to put the customer. It is certain the user will select an existing customer; however, it is also certain they may want to change the information in place on the display form. One could argue for having the controller extract the customer object, submit customer changes separately to the repository, then submit changes to the order, keeping the customerID in the order.
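    (A hedged sketch of a middle ground, reusing the question's names: persist the CustomerID as the authoritative link, but also expose a Customer reference so the view model can show and edit customer details without a separate lookup. The repository or ORM decides when to hydrate the reference, and on save it can treat the customer as its own aggregate.)

    public class Customer
    {
        public int ID { get; set; }
        public string Name { get; set; }
    }

    public class Order
    {
        public int ID { get; set; }

        // What the repository actually stores for the relationship.
        public int CustomerID { get; set; }

        // Convenience reference for display/editing; may be null if not loaded.
        public Customer Customer { get; set; }
    }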

    Read the article

  • Two-part question about submitting Bluetooth-enabled apps for the iPhone

    - by Kyle
    I have a couple of questions about submitting Bluetooth-enabled apps for the iPhone. I want to say first that Bluetooth is merely an option in the application; the application does not rely on Bluetooth completely, as there are many modes the user can go into. First, do they require you to have the "peer-peer" key set in UIRequiredDeviceCapabilities even if the Bluetooth interface options can be disabled or hidden for non-Bluetooth-enabled devices? Basically, it's just an OPTION in the game and there are many other modes the player can play. Does Apple not allow you to do that? I'm just curious, because it seems like something they would do. Adding to that, how do you check for its functionality at runtime? In essence, how do you check UIRequiredDeviceCapabilities at runtime? I'm aware of checking iPhone device types, so would that be a proper way of going about it? I'm also not sure which devices can run Bluetooth GameKit; there doesn't seem to be a proper reference on the SDK site, or I'm unable to find it. Thanks for reading! [edit] I can confirm the existence of somebody who was rejected for submitting a Bluetooth-enabled app which didn't work on an iPhone 2G. Of course, they didn't say whether that was the MAIN function of the app.

    Read the article

  • Connecting to a secure database from a website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then access, rather than having them use the VPN route to connect directly to the database. My question concerns my .NET service's relationship to the SQL Server database: what are the options for connecting to a private network VPN from a hosted domain, so that the web service can both read and write data to the database? For now I'm not too concerned about client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available for allowing my .NET web service to connect to the private network in as painless and transparent a way as possible. Switching the database onto public hosting is not an option, so I have to work with the scenario described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • OpenVPN bridge network from routed clients

    - by gphilip
    I have the following setup: subnet 1 - 10.0.1.0/24, with a machine used for NAT that also runs an OpenVPN client; subnet 2 - 192.168.1.0/24, with an OpenVPN server (the machine in subnet 1 connects here); subnet 3 - 10.0.2.0/24, which uses the NAT machine in subnet 1 to access the Internet, so all non-local traffic is routed to its eth0 interface. The OpenVPN client creates the tun0 interface and the appropriate routing, so that I can access machines from 192.168.1.0/24:

    [root@ip-10-0-1-208 ~]# telnet 192.168.1.186 8081
    Trying 192.168.1.186...
    Connected to 192.168.1.186.
    Escape character is '^]'.

    [root@ip-10-0-1-208 ~]# route -n
    Kernel IP routing table
    Destination      Gateway    Genmask          Flags Metric Ref Use Iface
    0.0.0.0          10.0.1.1   0.0.0.0          UG    0      0   0   eth0
    10.0.1.0         0.0.0.0    255.255.255.0    U     0      0   0   eth0
    10.8.0.1         10.8.0.5   255.255.255.255  UGH   0      0   0   tun0
    10.8.0.5         0.0.0.0    255.255.255.255  UH    0      0   0   tun0
    169.254.169.254  0.0.0.0    255.255.255.255  UH    0      0   0   eth0
    192.168.0.0      10.8.0.5   255.255.0.0      UG    0      0   0   tun0

    However, when I try the same from subnet 3, it can't reach that machine:

    [root@ip-10-0-2-61 ~]# telnet 192.168.1.186 8081
    Trying 192.168.1.186...

    I suspect that this is because subnet 3 is routed to eth0 on the NAT machine in subnet 1 and the traffic cannot jump to tun0. What's the easiest way to resolve this? I don't want to use iptables. I can't change the routing on machines in subnet 1 because it's done in AWS and so works only with specific interfaces. Also, the NAT machine gets its IP with DHCP, so bridging is a bit complicated. IP forwarding is set on the NAT machine:

    [root@ip-10-0-1-208 ~]# cat /proc/sys/net/ipv4/ip_forward
    1

    Thank you!

    Read the article

  • Abstract factory pattern on top of IoC?

    - by Sergei
    I have decided to use IoC principles on a bigger project. However, I would like to get something straight that's been bothering me for a long time. The conclusion I have come to is that an IoC container is an architectural pattern, not a design pattern. In other words, no class should be aware of its presence, and the container itself should be used at the application layer to stitch up all components. Essentially, it becomes an option on top of a well-designed object-oriented model. Having said that, how is it possible to access resolved types without sprinkling IoC containers all over the place (regardless of whether they are abstracted or not)? The only option I see here is to utilize abstract factories which use an IoC container to resolve concrete types; this should be easy enough to swap out for a set of standard factories. Is this a good approach? Has anyone here used it, and how well did it work for you? Is there anything else available? Thanks!
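    (A hedged sketch of the factory idea; all type names are illustrative, and the container is hidden behind a delegate so no particular IoC library is assumed. Consumers depend only on the factory abstraction; the composition root is the one place that wires the factory to the container, and it could just as easily hand back a hard-coded "standard" factory.)

    using System;

    public interface IReportGenerator
    {
        void Generate();
    }

    // The abstraction application code sees; nothing here mentions a container.
    public interface IReportGeneratorFactory
    {
        IReportGenerator Create();
    }

    // Composition-root implementation: wraps whatever container is in use,
    // e.g. () => container.Resolve<IReportGenerator>().
    public sealed class ContainerBackedReportGeneratorFactory : IReportGeneratorFactory
    {
        private readonly Func<IReportGenerator> resolve;

        public ContainerBackedReportGeneratorFactory(Func<IReportGenerator> resolve)
            => this.resolve = resolve;

        public IReportGenerator Create() => resolve();
    }

    // A consumer that creates instances on demand but never sees the container.
    public sealed class ReportScheduler
    {
        private readonly IReportGeneratorFactory factory;

        public ReportScheduler(IReportGeneratorFactory factory) => this.factory = factory;

        public void RunOnce() => factory.Create().Generate();
    }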

    Read the article

  • jQuery: bind generated elements

    - by superUntitled
    Hello, thank you for taking the time to look at this. I am trying to write my very first jQuery plugin and have run into my first problem. The plugin is called like this:

    <div id="kneel"></div>
    <script type="text/javascript">
        $("#kneel").zod(1, { });
    </script>

    It takes the first option (an integer) and returns HTML content that is dynamically generated by PHP. The generated content needs to be bound by a variety of functions, such as disabling form buttons and click events that return Ajax data. The plugin looks like this (I have included the whole plugin in case that matters):

    (function( $ ){
        $.fn.zod = function(id, options ) {
            var settings = {
                'next_button_text': 'Next',
                'submit_button_text': 'Submit'
            };
            return this.each(function() {
                if ( options ) {
                    $.extend( settings, options );
                }
                // variables
                var obj = $(this);
                /* these functions contain html elements that are
                   generated by the get() function below */
                // disable some buttons
                $('div.bario').children('.button').attr('disabled', true);
                // once an option is selected, enable the button
                $('input[type="radio"]').live('click', function(e) {
                    $('div.bario').children('.button').attr('disabled', false);
                })
                // when a button is clicked, return some data
                $('.button').bind('click', function(e) {
                    e.preventDefault();
                    $.getJSON('/returnSomeData.php', function(data) {
                        $('.text').html('<p>Hello: ' + data + '</p>');
                    });
                });
                // generate content, this is the content that needs binding...
                $.get('http://example.com/script.php?id='+id, function(data) {
                    $(obj).html(data);
                });
            });
        };
    })( jQuery );

    The problem I am having is that the functions created to run on the generated content are not binding to it. How do I bind the content created by the get() function?

    Read the article
