Search Results


  • ublas::bounded_vector<> being resized?

    - by n2liquid
    Now, seriously... I'll refrain from using bad words here because we're talking about the Boost fellows. It MUST be my mistake to see things this way, but I can't understand why, so I'll ask it here; maybe someone can enlighten me. Here it goes: uBLAS has a nice class template called bounded_vector<> that's used to create fixed-size vectors (or so I thought). From the Effective uBLAS wiki (http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Effective_UBLAS):

        The default uBLAS vector and matrix types are of variable size. Many linear
        algebra problems involve vectors with fixed size. 2 and 3 elements are common
        in geometry! Fixed size storage (akin to C arrays) can be implemented
        efficiently as it does not involve the overheads (heap management) associated
        with dynamic storage. uBLAS implements fixed sizes by changing the underlying
        storage of a vector/matrix to a "bounded_array" from the default
        "unbounded_array".

    Alright, so this bounded_vector<> thing frees you from having to specify the underlying storage of the vector as a bounded_array<> of the given size. Here I ask you: doesn't it look like this bounded vector has a fixed size? Well, it doesn't. At first I felt betrayed by the wiki, but then I reconsidered the meaning of "bounded" and I think I can let it pass. But in case you, like me (I'm still uncertain), are still wondering whether this makes sense: what I found out is that a bounded_vector<> actually can be resized; it just may not become larger than the size given as the template parameter. So:

    1. Do you think they had a good reason not to make a real fixed-size vector or matrix type?
    2. Do you think it's okay to "sell" this bounded -- as opposed to fixed-size -- vector to the users of my library as a "fixed-size" vector replacement, even named "Vector3" or "Vector2", as the Effective uBLAS wiki does?
    3. Do you think I should somehow implement a vector with a truly fixed size for this purpose? If so, how? (Sorry, but I'm really new to uBLAS; I just tried it today.)
    4. I am developing a 3D game. Should uBLAS be used for the calculations involved ("hey, geometry!", per the Effective uBLAS wiki)? If not, what replacement would you suggest?

    Edit: And just in case, yes, I've read this warning:

        It should be noted that this only changes the storage uBLAS uses for the
        vector3. uBLAS will still use all the same algorithms (which assume a variable
        size) to manipulate the vector3. In practice this seems to have no negative
        impact on speed. The above runs just as quickly as a hand crafted vector3
        which does not use uBLAS. The only negative impact is that the vector3 always
        stores a "size" member which in this case is redundant [or is it? I mean......].

    I see that it uses the same algorithms, assuming a variable size, but if an operation would actually change the size, shouldn't it be stopped (by an assertion)?

        ublas::bounded_vector<float,3> v3;
        ublas::bounded_vector<float,2> v2;
        v3 = v2;
        std::cout << v3.size() << '\n'; // prints 2

    Oh, come on, isn't this just plain betrayal?
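    If a truly fixed size is wanted, one option -- offered only as a sketch, not an official uBLAS facility; fixed_vector is a made-up name -- is a thin wrapper that keeps bounded_vector's storage but refuses size-changing assignment:

        #include <boost/numeric/ublas/vector.hpp>
        #include <cassert>
        #include <cstddef>

        namespace ublas = boost::numeric::ublas;

        // Rejects assignment from expressions of a different runtime size
        // instead of silently resizing, as bounded_vector does above.
        template <typename T, std::size_t N>
        class fixed_vector : public ublas::bounded_vector<T, N> {
            typedef ublas::bounded_vector<T, N> base;
        public:
            fixed_vector() : base(N) {}      // always constructed at full size

            template <typename E>
            fixed_vector& operator=(const ublas::vector_expression<E>& e) {
                assert(e().size() == N);     // stops the silent shrink seen above
                base::operator=(e);
                return *this;
            }
        };

        typedef fixed_vector<float, 3> Vector3;

    With this, the v3 = v2 assignment from the example trips the assertion in a debug build instead of quietly changing v3.size() to 2.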


  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts, actually: a proxy server and a game server) in C++ for a board game. It uses IOCP as the sockets interface. For it I've also written a "client simulator" (hereafter "client") app that spawns many client connections, each of which plays at very high speed, driving the CPU to 100% utilization. Here is the topology:

    - Game server: holds the game state. Real players do not connect to it directly but through the proxy server. When a player joins a game, the proxy asks for it on behalf of that player, the game server spawns a "player instance" for that player, and from then on every notification between the game server and the player passes through the proxy.
    - Proxy server: holds the TCP connections with the real players. Players communicate with the game server through it only.
    - Client simulator: connects to the proxy only.

    When I run the servers (again, it's actually two server apps) and the client locally, it all works just fine -- I'm talking about 40k+ player instances, all active in a game. On the other hand, when running a server remotely with, say, 1000 playing clients, things get strange. For example, I run it as said above, then kill the client simulator with Task Manager ("End Process Tree"). Then it looks as if a buffer on the remote server got modified by another thread -- in other words, as if memory corruption has occurred. The server crashes because it got an unknown message id (it's a custom protocol where each message has its own unique number).

    To make things clear, here is how I run the apps: PC1 runs the game server and the client simulator (because the clients connect to the proxy); PC2 runs the proxy server. The strangest thing is that only the remote side gets "corrupted" -- remote meaning whichever machine is not the PC I use to code the app (VC++ 2008). Let's call the dev machine "PC1". For example, if this time I run the game server on PC1 (so the proxy server is on PC2 and the client simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation: when I run the proxy server on PC1 (again, the dev machine) and the game server and client simulator on PC2, the game server on PC2 crashes.

    As for the IOCP config: the servers' internal connections use the default receive/send buffer sizes. I tried setting them to 1MB, but no luck. I have three PCs in total: 2 x Vista 64-bit (one of which is the dev machine; the other is connected through Wi-Fi) and 1 x WinXP 32-bit, all connected in a "full duplex" manner.

    What could be the reason? I've tried about everything: stack tracing, recording some actions (like read/write logging). I want to stress that only the PC I'm not using to code the apps crashes (actually, the server "role" running on it -- sometimes the game server and sometimes the proxy server). At first I thought the wireless PC might have problems (it's wireless...), but TCP has its own mechanisms to make sure packets are delivered properly, and a crash also happens when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLL versions might have problems, but then again, one of the tests is Vista vs. Vista and the other is Vista vs. XP. Any idea?
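    One thing worth ruling out, purely as an assumption from the symptoms: on localhost each completed receive tends to contain exactly one whole message, while over a real network TCP is a byte stream that freely splits and coalesces messages -- which surfaces exactly as "unknown message id" crashes on the remote box only. A minimal reassembly sketch, with a hypothetical 4-byte length prefix (the post's actual wire format isn't shown):

        #include <cstddef>
        #include <cstdint>
        #include <cstring>
        #include <vector>

        class FrameBuffer {
            std::vector<char> pending_;   // bytes carried over between receives
        public:
            // Feed every completed WSARecv/recv result in here; complete
            // messages come out through the handler, partials wait.
            template <typename Handler>
            void feed(const char* data, std::size_t len, Handler handle) {
                pending_.insert(pending_.end(), data, data + len);
                std::size_t off = 0;
                while (pending_.size() - off >= 4) {
                    uint32_t msg_len;
                    std::memcpy(&msg_len, &pending_[off], 4);
                    if (pending_.size() - off - 4 < msg_len) break;  // partial message
                    handle(&pending_[off + 4], msg_len);
                    off += 4 + msg_len;
                }
                pending_.erase(pending_.begin(), pending_.begin() + off);
            }
        };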


  • has_one and has_many associations: which side of the association is saved first

    - by SeeBees
    I have three simplified models:

        class Team < ActiveRecord::Base
          has_many :players
          has_one :coach
        end

        class Player < ActiveRecord::Base
          belongs_to :team
          validates_presence_of :team_id
        end

        class Coach < ActiveRecord::Base
          belongs_to :team
          validates_presence_of :team_id
        end

    I use the following code to test these models:

        team = Team.new
        team.coach = Coach.new
        team.save!

    team.save! returns true. But in another test:

        team = Team.new
        team.players << Player.new
        team.save!

    team.save! gives the following error:

        ActiveRecord::RecordInvalid: Validation failed: Players is invalid
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/validations.rb:1090:in `save_without_dirty!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/dirty.rb:87:in `save_without_transactions!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:182:in `transaction'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
        from (irb):14

    I figured out that when team.save! is called, it first calls player.save!. player needs to validate the presence of the id of the associated team, but at the time player.save! is called, team hasn't been saved yet, so team_id doesn't yet exist for player. This fails the player's validation, and the error occurs. On the other hand, team is saved before coach.save!, otherwise the first example would get the same error as the second. So I've concluded that when a has_many bs, a.save! saves the bs prior to a; and when a has_one b, a.save! saves a prior to b. If I'm right, why is this the case? It doesn't seem logical to me. Why do has_one and has_many associations save in different orders? Any ideas? Thanks.
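    For what it's worth, a sketch of one way to sidestep the ordering question entirely, using standard ActiveRecord behavior: persist the parent first, so the foreign key already exists by the time each child validates:

        # Saving the parent up front; on a persisted team, << saves the
        # player immediately with team_id set, and has_one assignment
        # saves the coach the same way.
        team = Team.new
        team.save!                   # team now has an id
        team.players << Player.new   # inserts with team_id populated
        team.coach = Coach.new       # assigned and saved against the saved team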


  • Can't ping other machines at Linux VPN PPTP server's local lan from outside

    - by Marco Sanchez
    Before anything else, hello guys -- this is the first time I ask for something here, so I hope someone can give me a hand. Please look at the following network diagram (reconstructed from the original layout):

        VPN Server (SuSE SLES11)     Webserver
                 |                       |
                 +------- VPN LAN -------+
                 |
        Router with unique IP
        (port forwarding rules set, VPN pass-through enabled)
                 |
        PPTP connection over Internet
                 |
        Workstation (PC or laptop with Windows)

    The idea is for the workstation to connect to the PPTP server and then access a web application on the webserver. Right now I have the PPTP server configured and the VPN works: I can connect to the SLES11 server from the workstation with no problems, I can ping it, and everything works fine. But if I try to ping the webserver from the workstation, I can't reach it. I'm making a mistake somewhere, but I don't see where. Please note that I'm not a network expert, so I'd greatly appreciate specific guidance. Here is some info on the IPs:

        *** SLES11 VPN server, 2 network cards:
        -- eth0 (internal network)  IP: 192.168.210.5    MASK: 255.55.255.0
        -- eth1 (external network)  IP: 192.168.1.105    MASK: 255.55.255.0

        *** Webserver, 1 network card:
        -- eth0 (internal network)  IP: 192.168.210.221  MASK: 255.55.255.0

        *** Workstation -- IP info once the connection to the VPN has been established:

        PPP adapter Test VPN Connection:
           Connection-specific DNS Suffix . :
           Description . . . . . . . . . . . : Test VPN Connection
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.210.110(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . : 0.0.0.0
           DNS Servers . . . . . . . . . . . : 189.209.208.181 (defined as part of the PPTP server options config script)
                                               189.209.127.244
           Primary WINS Server . . . . . . . : 192.168.210.220 (defined as part of the PPTP server options config script)
           NetBIOS over Tcpip. . . . . . . . : Enabled

    I also defined the following in iptables:

        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A INPUT -i eth0 -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -i eth0 -p gre -j ACCEPT

    If you need any piece of information from the PPTP server scripts, please let me know. The thing is that I can actually connect to the VPN server and access its services and everything, but after that I can't reach any other computer on that LAN. Any help would be greatly appreciated; thanks in advance.
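    For comparison, a sketch of the forwarding setup this kind of topology usually needs on the PPTP box -- assuming VPN clients arrive on ppp+ interfaces and the LAN is 192.168.210.0/24 behind eth0; adjust to the real topology:

        # Routing between ppp clients and the LAN requires IP forwarding:
        sysctl -w net.ipv4.ip_forward=1   # persist via net.ipv4.ip_forward=1 in /etc/sysctl.conf

        # NAT VPN-client traffic onto the LAN and allow it through FORWARD:
        iptables -t nat -A POSTROUTING -s 192.168.210.0/24 -o eth0 -j MASQUERADE
        iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT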


  • Link Error : xxx is already defined in *****.LIB :: What exactly is wrong?

    - by claws
    Problem: I'm trying to use a library named DCMTK, which in turn uses some other external libraries (zlib, libtiff, libpng, libxml2, libiconv). I downloaded these external libraries (*.LIB and *.h files) from the same website. Now, when I compile the DCMTK library, I get link errors (793 of them) like this:

        Error 2 error LNK2005: __encode_pointer already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 3 error LNK2005: __decode_pointer already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 4 error LNK2005: __CrtSetCheckCount already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 5 error LNK2005: __invoke_watson already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 6 error LNK2005: __errno already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 7 error LNK2005: __configthreadlocale already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 8 error LNK2005: _exit already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir

    Documentation: This seems to be a popular error for this library, so they have a FAQ entry addressing the issue (http://forum.dcmtk.org/viewtopic.php?t=35) which says:

        The problem is that the linker tries to combine different, incompatible
        versions of the Visual C++ runtime library into a single binary. This happens
        when not all parts of your project and the libraries you link against are
        generated with the same code generation options in Visual C++. Do not use the
        /NODEFAULTLIB workaround, because strange software crashes may follow. Fix the
        problem! DCMTK is by default compiled with the "Multithreaded" or
        "Multithreaded Debug" code generation option (the latter for Debug mode).
        Either change the project settings of all of your code to use these code
        generation options, or change the code generation for all DCMTK modules and
        re-compile. MFC users beware: DCMTK should be compiled with "Multithreaded
        DLL" or "Multithreaded DLL Debug" settings if you want to link the libraries
        into an MFC application.

    A solution to the same problem for others (http://stackoverflow.com/questions/2259697/vscopengl-huge-amount-of-linker-issues-with-release-build-only) says:

        It seems that your release build is trying to link to something that was built
        debug. You probably have a broken dependency in your build (or you missed
        rebuilding something to release by hand if your project is normally built in
        pieces). More technically, you seem to be linking projects built with
        different C Run Time library settings, one with "Multi-Threaded", another one
        with "Multi-Threaded Debug". Adjust the settings for all the projects to use
        the very same flavour of the library and the issue should go away.

    Questions: Until now I thought that unstandardized name mangling was the only thing that could cause linking failures; now I've learned there are other causes with the same effect.

    1. What's going on with "Debug Mode" (Multi-Threaded Debug) vs. "Release Mode" (Multi-Threaded)? What exactly happens under the hood, and why exactly does this cause a linker error? I wonder if there are also "Single-Threaded Debug" and "Single-Threaded" settings that cause the same thing.
    2. The documentation talks about "code generation options". What code generation options? What are they?
    3. The documentation specifically warns us not to use the /NODEFAULTLIB workaround (e.g. /NODEFAULTLIB:msvcrt). Why? What trouble would it cause, and what exactly does it do?
    4. Please explain the last point in the documentation for MFC users, because I'm going to use MFC later in this project. Why should we do it, and what trouble would it cause if I don't?
    5. Anything more you'd like to mention regarding similar errors? I'm very interested in the linker and its problems, so if there are similar pitfalls, please mention them, or at least give me some keywords.
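    For reference, the "code generation options" in question are the /M* compiler switches. Each one embeds a different default CRT library name in the object files (VC9 names shown below), which is exactly why mixing them produces the LIBCMTD.lib-vs-MSVCRTD.lib collisions above. The old single-threaded variants (/ML, /MLd) were removed in VS2005, so on VC9 only these four exist:

        /MT   Multi-threaded            -> pulls in libcmt.lib   (static CRT)
        /MTd  Multi-threaded Debug      -> pulls in libcmtd.lib  (static debug CRT)
        /MD   Multi-threaded DLL        -> pulls in msvcrt.lib   (imports msvcr90.dll)
        /MDd  Multi-threaded DLL Debug  -> pulls in msvcrtd.lib  (imports msvcr90d.dll)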


  • What can I do to improve a project when nobody listens? Developers vs. Management

    - by NazGul
    Hi all, I hope I'm not the only one, and that I can get an answer from someone with more experience than me, so I can think more clearly and not get depressed with this developer's life. I've been working as a developer for a small company for three years now. In those three years I've worked on the same project, and honestly, I think this project could be used as a case study, because it has every situation that should never happen in a project -- the kind that makes a project fail.

    To begin with, and as you've probably already noticed, the project is three years old (development only) and still unfinished, because every meeting brings a "new priority", a "new problem" to solve, or a "new feature" to add. So, the first problem: no target is set. How can you know when something is finished if you don't know what you want? I understand Management -- they see an opportunity and try to seize it -- but I don't understand how they cannot see (or hear us say) that they'll lose everything they already have for what they might eventually get.

    Second, there is no team. My team consists of three people: a senior developer, a DBA, and me -- the junior developer who does all the work (support, testing, new features, bug fixing, meetings, project management with clients, etc.). The first (the senior developer) does not perform any tests on his changes, so most of the time his changes give us problems ("us" meaning me, since I'm the one who fixes them). The second (the DBA) is an uncompromising person you cannot talk to -- believe me, I tried! In his view, everything he does is fantastic, even if it's built in the most complicated way possible. And he does whatever he wants, even when we won't need it for another five months and an extra hand would help with the things we have to do now. As you can see, it is very hard to work with no help.

    Third, there is no testing. Every -- I repeat, every -- release of the project, the customers want to kill us, because there are a lot of bugs. Management? They say they want tests before the release. Us? We say the same. Time? No time. Management? "There's always some time to open the application and click some buttons." Us? We try to explain that it's not that simple. Management doesn't care -- end of story. Actually, most of the bugs could be avoided with rigorous work, but some people just want to put on a show for Management: "Did you ask for this? Cool, it's done. Bugs? The do-all-the-work guy will solve them." Unfortunately for me, sometimes the do-all-the-work guy also has to finish the feature itself. And to make it all better, I'm the one who listens to the complaints from the customers. Cool, huh? I know everyone makes mistakes, but there are mistakes and mistakes...

    To top it off, in Management's view "the problem is the lack of individual project management", because we cannot do everything they ask -- even though the project itself has no PM. And they ask us to work overtime without any reward... I do say all of this to Management and the other members, but by saying it I become the bad guy, the guy who complains when everything is going well... "but we need to work overtime"... sigh.

    What can I do to make this work? Has anyone been in a situation like this, and what did you do? I hope you can understand my problem; my English is a little rusty. Thanks.


  • How can I tell the size of my app during development?

    - by Newbyman
    My programming decisions are directly related to how much room I have left -- or worse, how much I need to shave off to get under the 10 MB limit. I've read that Apple quietly increased the 3G/EDGE download limit from 10 MB to 20 MB in April, in preparation for the iPad. Either way, my real question is: how can I get a rough estimate of how large my app will end up while I'm still in the development phase?

    Is the file size of my development folder roughly a 1:1 ratio? Is the compressed size of my development folder a better approximation? My .xcodeproj file is only a couple of hundred kB, but the size of my folder is 11.8 MB. I have a .sqlite database, fewer than 20 small PNG images and a Settings.bundle; the rest are Xcode files related to build, build for iPhone OS, simulator, etc. My source code is rather large, with around 1000 lines in most of the major controllers -- all in all around 48 .h/.m files -- but the Classes folder inside my development folder is less than 800 kB.

    Digging around inside my build folder, there are lots of iPhone Simulator files and debugging files which I don't think will contribute to the final product. The application file states that it is around 2.3 MB. However, this is such a large difference from the 11.8 MB that I have to wonder if this is just another piece of the equation.

    I have the app on my device (I'm in the testing phase), so I tried to see how large the working version was on the device by checking in iTunes; my development app is visible in the applications list, but there is no information about the app, most importantly its size. I also checked in Organizer: in the lower portion of the screen (Applications) I found my application and selected the drop-down arrow, which gave me "Application Data" and a download button to save a file to my desktop, named with the unique App ID. Inside the folder were three folders (Documents, Library, tmp): Documents had a copy of my .sqlite database, Library had a few more files but nothing obvious or of size, and tmp was empty. All in all the entire folder was only 164 kB -- which tells me this is not the right place to look for the size either.

    I understand that the size is considered to be the size of my binary plus all the additional files and images I've added. Does anyone have an effective way of gauging how large the binary is, or of relating the development folder size to the final App Store application size? I know questions with similar aspects have been posted, but I couldn't find an answered post that really described which files matter or how to determine the size specifically. I know this question looks like a book, but I wanted to be specific about exactly what I'm looking for and what I've attempted so far.

    *Note: all files are unzipped and in regular working Xcode order -- a single app with no brought-in builds or referenced projects. I'm sure this is straightforward, I just don't know where to look.
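    One rough way to measure this from the command line -- the paths assume the default Xcode 3 build layout and a hypothetical MyApp target, so adjust to the actual project:

        # Build for device in Release, then measure the .app bundle itself
        # (the bundle, not the whole development folder, is what ships):
        du -sh build/Release-iphoneos/MyApp.app

        # Zipping the bundle approximates the compressed over-the-air size:
        zip -qr MyApp.zip build/Release-iphoneos/MyApp.app && du -sh MyApp.zip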


  • RackSpace Cloud Strips $_SESSION if URL Has Certain File Extensions

    - by macinjosh
    The Situation: I am creating a video training site for a client on the Rackspace Cloud, using the traditional LAMP stack (Rackspace's cloud has both Windows and LAMP stacks). The videos and other media files I'm serving on this site need to be protected, as my client charges money for access to them. There is no DRM or funny business like that; essentially, we store the files outside the web root and use PHP to authenticate users before they can access the files, using mod_rewrite to run the request through PHP. So let's say the user requests a file at this URL:

        http://www.example.com/uploads/preview_image/29.jpg

    I am using mod_rewrite to rewrite that URL to:

        http://www.example.com/files.php?path=%2Fuploads%2Fpreview_image%2F29.jpg

    Here is a simplified version of the files.php script:

        <?php
        // Sets up the environment and sets $logged_in.
        // This part requires $_SESSION.
        require_once('../../includes/user_config.php');

        if (!$logged_in) {
            // Redirect non-authenticated users
            header('Location: login.php');
        }

        // This user is authenticated, continue
        $content_type = "image/jpeg";

        // getAbsolutePathForRequestedResource() takes a query parameter
        // called path and uses DB lookups and some string manipulation to
        // get an absolute path. This part has no bearing on the problem.
        $file_path = getAbsolutePathForRequestedResource($_GET['path']);
        // At this point $file_path looks something like:
        // "/path/to/a/place/outside/the/webroot"

        if (file_exists($file_path) && !is_dir($file_path)) {
            header("Content-Type: $content_type");
            header('Content-Length: ' . filesize($file_path));
            echo file_get_contents($file_path);
        } else {
            header('HTTP/1.0 404 Not Found');
            header('Status: 404 Not Found');
            echo '404 Not Found';
        }

        exit();
        ?>

    The Problem: Let me start by saying this works perfectly for me on local test machines. However, once deployed to the cloud, it stops working. After some debugging, it turns out that if a request to the cloud has certain file extensions, like .JPG, .PNG, or .SWF (i.e. extensions of typically static media files), the request is routed to a cache system called Varnish. The end result is that by the time this whole process reaches my PHP script, the session is not present. If I change the extension in the URL to .PHP, or even add a query parameter, Varnish is bypassed and the PHP script can get the session.

    No problem, right? I'll just add a meaningless query parameter to my requests! Here is the rub: the media files I am serving through this system are requested by compiled SWF files that I have zero control over. They are generated by third-party software, and I have no hope of adding to or changing the URLs they request. Are there any other options I have here?

    Update: I should note that I have verified this behavior with Rackspace support, and they have said there is nothing they can do about it.
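    For context, a minimal sketch of the rewrite rule described above (the post doesn't show the actual rules, so this is an assumption about their shape):

        # Route media requests through the PHP gatekeeper; QSA keeps any
        # existing query string intact.
        RewriteEngine On
        RewriteRule ^uploads/(.*)$ files.php?path=/uploads/$1 [QSA,L]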


  • Evaluating costs/benefits of using extension methods in C# >= 3.0

    - by BillW
    Hi, in what circumstances (usage scenarios) would you choose to write an extension method rather than sub-classing an object?

    Full disclosure: I am not an MS employee; I do not know Mitsu Furota personally; I do know the author of the open-source Componax library mentioned here, but I have no business dealings with him whatsoever; and I am not creating, or planning to create, any commercial product using extensions. In sum, this post comes from pure intellectual curiosity, related to my trying to (continually) become aware of "best practices".

    I find the idea of extension methods "cool", and obviously you can do "far-out" things with them, as in the many examples in Mitsu Furota's (MS) blog posts. A personal friend wrote the open-source Componax library, and there are some remarkable facilities in there; but he is in complete command of his small company, with total control over code guidelines, and every line of code "passes through his hands". While this is speculation on my part, I think/guess other issues might come into play in a medium-to-large software team situation regarding the use of extensions. Looking at MS's guidelines, you find:

        In general, you will probably be calling extension methods far more often than
        implementing your own. ... In general, we recommend that you implement
        extension methods sparingly and only when you have to. Whenever possible,
        client code that must extend an existing type should do so by creating a new
        type derived from the existing type. For more information, see Inheritance
        (C# Programming Guide). ... When the compiler encounters a method invocation,
        it first looks for a match in the type's instance methods. If no match is
        found, it will search for any extension methods that are defined for the
        type, and bind to the first extension method that it finds.

    And MS also says:

        Extension methods present no specific security vulnerabilities. They can
        never be used to impersonate existing methods on a type, because all name
        collisions are resolved in favor of the instance or static method defined by
        the type itself. Extension methods cannot access any private data in the
        extended class.

    Factors that seem obvious to me would include:

    1. I assume you would not write an extension unless you expected it to be used very generally and very frequently. On the other hand, couldn't you say the same thing about sub-classing?
    2. Knowing we can compile extensions into a separate dll, reference the compiled dll, and then use the extensions is "cool", but does that "balance out" the cost inherent in the compiler first having to check whether instance methods are defined, as described above? Or the cost, in case of a "name clash", of using static invocation to make sure your extension is invoked rather than the instance definition?
    3. How frequent use of extensions would affect run-time performance or memory use, I have no idea.

    So, I'd appreciate your thoughts, or knowing how and when you do -- or don't -- use extensions compared to sub-classing. Thanks, Bill.
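    A small self-contained example of the resolution rule quoted above: the extension binds only because string has no instance method named Shout, and the explicit static-call form sidesteps any future name clash.

        using System;

        static class StringExtensions
        {
            // An extension method is just a static method with `this` on
            // the first parameter; it can never override an instance method.
            public static string Shout(this string s)
            {
                return s.ToUpper() + "!";
            }
        }

        class Program
        {
            static void Main()
            {
                Console.WriteLine("hello".Shout());                  // HELLO!
                // Explicit static invocation, useful if a clash arises:
                Console.WriteLine(StringExtensions.Shout("hello"));  // HELLO!
            }
        }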


  • Using inheritance and polymorphism to solve a common game problem

    - by Barry Brown
    I have two classes; let's call them Ogre and Wizard. (All fields are public to make the example easier to type in.)

        public class Ogre {
            int weight;
            int height;
            int axeLength;
        }

        public class Wizard {
            int age;
            int IQ;
            int height;
        }

    In each class I can create a method called, say, battle() that will determine who will win if an Ogre meets an Ogre or a Wizard meets a Wizard. Here's an example. If an Ogre meets an Ogre, the heavier one wins. But if the weight is the same, the one with the longer axe wins.

        public Ogre battle(Ogre o) {
            if (this.height > o.height) return this;
            else if (this.height < o.height) return o;
            else if (this.axeLength > o.axeLength) return this;
            else if (this.axeLength < o.axeLength) return o;
            else return this; // default case
        }

    We can make a similar method for Wizards. But what if a Wizard meets an Ogre? We could of course make a method for that, comparing, say, just the heights.

        public Wizard battle(Ogre o) {
            if (this.height > o.height) return this;
            else if (this.height < o.height) return o;
            else return this;
        }

    And we'd make a similar one for Ogres that meet Wizards. But things get out of hand if we have to add more character types to the program. This is where I get stuck. One obvious solution is to create a Character class with the common traits; Ogre and Wizard inherit from Character and extend it to include the other traits that define each one.

        public class Character {
            int height;

            public Character battle(Character c) {
                if (this.height > c.height) return this;
                else if (this.height < c.height) return c;
                else return this;
            }
        }

    Is there a better way to organize the classes? I've looked at the strategy pattern and the mediator pattern, but I'm not sure how either of them (if any) could help here. My goal is to reach some kind of common battle method, so that if an Ogre meets an Ogre it uses the Ogre-vs-Ogre battle, but if an Ogre meets a Wizard, it uses a more generic one. Further, what if the characters that meet share no common traits? How can we decide who wins a battle?
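    One conventional answer to "same-type pairs use the special rule, mixed pairs use the generic one" is double dispatch. A sketch follows -- GameCharacter avoids clashing with java.lang.Character, the ogre rule follows the weight-then-axe description above, and the wizard-vs-wizard IQ rule is invented purely for illustration:

        abstract class GameCharacter {
            int height;

            // First dispatch: resolved on the runtime type of `this`.
            abstract GameCharacter battle(GameCharacter other);

            // Second dispatch targets; mixed pairs fall back to height.
            GameCharacter battledBy(Ogre o)   { return taller(this, o); }
            GameCharacter battledBy(Wizard w) { return taller(this, w); }

            static GameCharacter taller(GameCharacter a, GameCharacter b) {
                return a.height >= b.height ? a : b;
            }
        }

        class Ogre extends GameCharacter {
            int weight, axeLength;
            GameCharacter battle(GameCharacter other) { return other.battledBy(this); }
            GameCharacter battledBy(Ogre o) {          // ogre vs ogre: special rule
                if (weight != o.weight) return weight > o.weight ? this : o;
                return axeLength >= o.axeLength ? this : o;
            }
        }

        class Wizard extends GameCharacter {
            int age, IQ;
            GameCharacter battle(GameCharacter other) { return other.battledBy(this); }
            GameCharacter battledBy(Wizard w) {        // wizard vs wizard rule (assumed)
                return IQ >= w.IQ ? this : w;
            }
        }

    Calling a.battle(b) picks the most specific matchup that has been written, and anything not specially handled lands on the generic height comparison -- which also answers the "no common traits" case: the shared base class has to carry at least one comparable trait (or a score method) for a generic fight to be decidable at all.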


  • jQuery tabs error when used with jTemplates

    - by tessa
    I am using jTemplates to format data returned from a JSON query. I am trying to transform the div with the id "fundingDialogTabs" into jQuery tabs after the template is processed. It renders the tab buttons, but both the fragment1 and fragment2 divs are showing at the same time, and I get the error "jQuery UI Tabs: Mismatching fragment identifier" when clicking on the fragment2 tab. I tested the jQuery tabs code outside of the template and it works fine. Here is the template (saved in a .tpl file):

        {#template MAIN}
        <div style="width:500px">
          <table border="0" cellpadding="0" cellspacing="0" id="fundingDialogTitle">
            <tr>
              <td class="fundingDialogTitle">Funding Breakout</td>
              <td style="text-align:right"><img src="../../images/fscaClose.gif" onclick="CloseFundingDialog()" style="cursor:hand; width:25px; height:25px;"></td>
            </tr>
          </table>
        </div>
        <div style="padding:10px 10px 10px 10px; width:500px">
          <div id="fundingDialogTabs">
            <ul>
              <li><a href="#fragment1"><span>Source</span></a></li>
              <li><a href="#fragment2"><span>Line Item</span></a></li>
            </ul>
            <div id="fragment1">
              <table border="0" cellpadding="0" cellspacing="0" id="fundingDialog">
                <tr>
                  <th>Funding Source</th>
                  <th>Amount</th>
                </tr>
                {#foreach $T.d as fundingList}
                {#include ROW root=$T.fundingList}
                {#/for}
              </table>
            </div>
            <div id="fragment2">
              Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam
              nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat
              volutpat. Lorem ipsum dolor sit amet, consectetuer adipiscing elit,
              sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna
              aliquam erat volutpat.
            </div>
          </div>
        </div>
        {#/template MAIN}

        {#template ROW}
        <tr>
          <td>{$T.SourceName}</td>
          <td>{$T.Amount}</td>
        </tr>
        {#/template ROW}

    Here are the JSON and processTemplate methods:

        function GetFundingDialog(id) {
            $.ajax({
                type: "POST",
                url: "../../WebService/Workplan.asmx/GetFundingList",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(msg) { ApplyTemplate(msg, id); },
                error: function(result) { ShowError(result.responseText); }
            });
        }

        function ApplyTemplate(msg, id) {
            var fundingDialog = $("div[id='divFundingList']");
            if (fundingDialog.length > 0) {
                fundingDialog.setTemplateURL('../../usercontrols/Workplan/FundingList.tpl');
                fundingDialog.processTemplate(msg);
                fundingDialog[0].style.display = "block";
                var src = $("img[id='openFundingList_" + id + "']");
                if (src.length > 0) {
                    var srcPosition = findPos(src[0]);
                    fundingDialog[0].style.top = parseInt(srcPosition[1] + 25);
                }
            }
            $("#fundingDialogTabs").tabs();
        }



  • Internet Explorer background-color hover problem

    - by danilo
    I have a strange problem with table formatting in IE 7. (The screenshot of the table that belonged here did not survive.) In IE, when using border-collapse, the borders don't get displayed correctly. That's why I used this fix:

        .table-vmlist td { border-top: 1px solid black; }
        td.col-vm-status, tr.row-details td { border-left: 1px solid black; }
        td.col-vm-rdp, tr.row-details td { border-right: 1px solid black; }
        .table-vmlist { border-bottom: 1px solid black; }

    When hovering over a row, it gets highlighted with CSS:

        .table-vmlist tr.row-vm { background-color: #A4C3EF; }
        .table-vmlist tr.row-vm:hover { background-color: #91BAEF; }

    Now, in IE 7, when moving the mouse from the top to the bottom of the list, every row gets highlighted correctly and no problems occur. But if I move the mouse pointer from the bottom of the list to the top, every second row seems to lose its border. Can someone explain what the problem is, and how to solve it? This is my markup:

        <tr class="row-vm">
          <td class="col-vm-status status-1"><img title="Host Down" alt="Down" src="/Technik/vm-management/img/hoststatus_1.png"></td>
          <td class="col-vm-name">V1-VM-1</td>
          <td class="col-vm-stati">
            <img title="Ping" alt="Ping status" src="/Technik/vm-management/img/servicestatus_3.png">
            <img title="CPU" alt="CPU status" src="/Technik/vm-management/img/servicestatus_3.png">
            <img title="RAM" alt="RAM status" src="/Technik/vm-management/img/servicestatus_3.png">
            <img title="C:\ Diskspace" alt="Disk space status" src="/Technik/vm-management/img/servicestatus_3.png">
          </td>
          <td class="col-vm-owner">kus</td>
          <td class="col-vm-purpose">Citrix Testserver</td>
          <td class="col-vm-ip">-</td>
          <td class="col-vm-uptime">-</td>
          <td class="col-vm-rdp">&nbsp;</td>
        </tr>

    And the CSS (comments translated from German):

        /* Format the VM table */
        .table-vmlist { border-collapse: collapse; }
        .table-vmlist tr { border: 1px solid black; }
        .table-vmlist tr.row-header { border: none; }
        .table-vmlist tr.row-vm { background-color: #A4C3EF; }
        .table-vmlist tr.row-vm:hover { background-color: #91BAEF; }
        .table-vmlist th { text-align: left; }
        .table-vmlist td { }
        .table-vmlist th, table td { padding: 2px 0px; }

        /* Set the column widths of the VM table */
        .table-vmlist #col-status { width: 25px; }
        .table-vmlist #col-stati { width: 90px; }
        .table-vmlist #col-owner { width: 90px; }
        .table-vmlist #col-ip { width: 100px; }
        .table-vmlist #col-uptime { width: 70px; }
        .table-vmlist #col-rdp { width: 25px; }
        .table-vmlist tr.row-details td { padding: 0px 10px; }

        /* No border around linked images */
        a img { border-width: 0px; }

        /* Show a hand cursor instead of the normal arrow on the power button */
        td.status-1 img { cursor: pointer; }

        img.ajax-loader { margin-left: 2px; }

    The IE fix:

        .table-vmlist td { border-top: 1px solid black; }
        td.col-vm-status, tr.row-details td { border-left: 1px solid black; }
        td.col-vm-rdp, tr.row-details td { border-right: 1px solid black; }
        .table-vmlist { border-bottom: 1px solid black; }
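    A commonly suggested workaround for IE7's row-hover repaint quirks -- offered as an assumption, not a confirmed fix for this exact case -- is to put the hover background on the cells rather than on the row, so IE repaints cell by cell instead of row by row:

        /* Move the backgrounds from the tr to its cells: */
        .table-vmlist tr.row-vm td { background-color: #A4C3EF; }
        .table-vmlist tr.row-vm:hover td { background-color: #91BAEF; }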


  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: the machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2-year-old to keep in touch with her grandparents and other members of the family -- which everyone loves.

    It has a generic ATX motherboard with no identifying markings other than one stamp that says "Rev.B". CPU-Z identifies the motherboard model as VT8601 but doesn't provide a manufacturer name. On board it has 1 x 10/100 LAN, 2 x USB 1.0, VGA, PS/2 for keyboard and mouse, a parallel port, 2 x serial ports, 2 x IDE, 1 x floppy, 2 x SDRAM slots, 1 x CPU socket seating a 1.3GHz Intel Celeron, 3 x PCI and 1 x AGP -- although you can only use 2 of the PCI slots if you use the AGP slot, due to the physical layout of the board. It's got 768Mb of PC133 SDRAM (1 x 512Mb and 1 x 256Mb) as well as a D-Link WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board containing 3 external + 1 internal USB connectors; it has a NEC uPD720102 chipset. It has a DVD+/-RW running as master on IDE1, a 1.44Mb 3.5" floppy drive connected to the floppy connector, and an 80Gb Western Digital hard drive running as master on IDE0. All this sits in a slimline case. I don't know the wattage of the PSU, but I can post it later if it proves helpful. The motherboard runs a version of Award BIOS; I don't have the version number to hand, but again, I can post it later if it would help.

    The hard disk is freshly formatted and built with Windows XP Professional SP3, up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on-board VGA and COM1 sockets, with the most current drivers from the Daytek website and the most current version of the ELO Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.

    The problem: if I hook any devices to the USB 2.0 sockets, the whole machine hangs and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt, and I have to hard boot it to recover.

    Workarounds found: I can plug the same devices into the on-board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 ports, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing that the devices were connected.

    Attempted solutions: I've seen suggestions that this could be a power problem -- that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem (after all, the motherboard has the same standard connector regardless of the PSU wattage), I tried disabling all the on-board devices that I'm not using -- on-board LAN, the second COM port, the AGP connector, etc. -- through the BIOS, in what I'm sure was a futile attempt to reduce power consumption. I also modified the ACPI and power management settings. It didn't have any noticeable effect, although it didn't do any harm either.

    Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors -- is that right?

    I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the Wi-Fi and USB cards in the slots -- although I'm not sure this will make any difference. Does anyone have any experience that would suggest this might work?

    Other thoughts/questions: perhaps this is an incompatibility between the USB 2.0 card and the BIOS -- would re-flashing the BIOS with a newer version help? Do I need to identify the manufacturer of the motherboard in order to find a BIOS edition specific to it, or will any version of Award BIOS function in its place?

    Question: does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Edit: updated the USB 2.0 info with a reference to the actual card -- http://www.xpcgear.com/lpnec4u.html


  • How to add additional disks to a Windows 2008 KVM-based guest?

    - by taazaa
    I have a Windows 2008 KVM-based guest VM running on an Ubuntu 10 host; it is a raw image of 22G. I want to add a "data" drive which would show up as the "D:\" drive on the guest. I first created a raw image using:

        qemu-img create -f raw ~/vmdisk2.img 50G

    Then I tried attaching it using virsh attach-disk. When that did not work, I tried editing the XML file of the VM directly. Neither seemed to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can (hopefully) clone it and then attach whatever storage the application at hand needs.

    Update: the XML of the VM before adding the second drive:

        <domain type='kvm'>
          <name>win08e-vm1</name>
          <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='localtime'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/win08e-vm1.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='file' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <source file='/home/taazaa/iso/Win08ER264.iso'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:7f:a7:ae'/>
              <source bridge='br0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <input type='tablet' bus='usb'/>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
            <video>
              <model type='vga' vram='9216' heads='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Thanks!
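    For reference, the extra <disk> element such a setup usually needs looks like the sketch below. Grounded in the XML above, hdb is free (hda is the system disk and hdc the CD-ROM), but the image path is assumed -- ~ in the qemu-img command presumably expanded to /home/taazaa, and libvirt needs the absolute path. Leaving out the <address> element lets libvirt assign one itself:

        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/home/taazaa/vmdisk2.img'/>
          <target dev='hdb' bus='ide'/>
        </disk>

    The equivalent one-liner would be something like: virsh attach-disk win08e-vm1 /home/taazaa/vmdisk2.img hdb --driver qemu --subdriver raw --persistent. Either way the new disk still has to be brought online, initialized and formatted in the guest's Disk Management before it shows up as D:\.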


  • Listening for TCP and UDP requests on the same port

    - by user339328
    I am writing a client/server set of programs. Depending on the operation requested by the client, I make a TCP or UDP request. Implementing the client side is straightforward, since I can easily open a connection with either protocol and send the request to the server side. On the server side, on the other hand, I would like to listen for both UDP and TCP connections on the same port. Moreover, I'd like the server to open a new thread for each connection request.

    I have adopted the approach explained in a linked example, extending the code sample by creating new threads for each TCP/UDP request. This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings. Please give me any suggestion how I can correct this. Thanks. Here is the server code:

        public class Server {
            public static void main(String args[]) {
                try {
                    int port = 4444;
                    if (args.length > 0)
                        port = Integer.parseInt(args[0]);
                    SocketAddress localport = new InetSocketAddress(port);

                    // Create and bind a tcp channel to listen for connections on.
                    ServerSocketChannel tcpserver = ServerSocketChannel.open();
                    tcpserver.socket().bind(localport);

                    // Also create and bind a DatagramChannel to listen on.
                    DatagramChannel udpserver = DatagramChannel.open();
                    udpserver.socket().bind(localport);

                    // Specify non-blocking mode for both channels, since our
                    // Selector object will be doing the blocking for us.
                    tcpserver.configureBlocking(false);
                    udpserver.configureBlocking(false);

                    // The Selector object is what allows us to block while waiting
                    // for activity on either of the two channels.
                    Selector selector = Selector.open();
                    tcpserver.register(selector, SelectionKey.OP_ACCEPT);
                    udpserver.register(selector, SelectionKey.OP_READ);

                    System.out.println("Server Started on port: " + port + "!");

                    // Load map
                    Utils.LoadMap("mapa");
                    System.out.println("Server map ... LOADED!");

                    // Now loop forever, processing client connections.
                    while (true) {
                        try {
                            selector.select();
                            Set<SelectionKey> keys = selector.selectedKeys();

                            // Iterate through the Set of keys.
                            for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) {
                                SelectionKey key = i.next();
                                i.remove();

                                Channel c = key.channel();
                                if (key.isAcceptable() && c == tcpserver) {
                                    new TCPThread(tcpserver.accept().socket()).start();
                                } else if (key.isReadable() && c == udpserver) {
                                    new UDPThread(udpserver.socket()).start();
                                }
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    System.err.println(e);
                    System.exit(1);
                }
            }
        }

    The UDPThread code:

        public class UDPThread extends Thread {
            private DatagramSocket socket = null;

            public UDPThread(DatagramSocket socket) {
                super("UDPThread");
                this.socket = socket;
            }

            @Override
            public void run() {
                byte[] buffer = new byte[2048];
                try {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    String inputLine = new String(buffer);
                    String outputLine = Utils.processCommand(inputLine.trim());
                    DatagramPacket reply = new DatagramPacket(outputLine.getBytes(),
                            outputLine.getBytes().length, packet.getAddress(), packet.getPort());
                    socket.send(reply);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                socket.close();
            }
        }

    I receive:

        Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException
            at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source)
            at server.UDPThread.run(UDPThread.java:25)

    Thanks.
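    The exception comes from calling receive() on the adaptor socket of a channel that has been put in non-blocking mode. One way out, sketched against the selector loop above (UDPWorker is a hypothetical worker class, not from the original): read the datagram through the channel on the selector thread, which never blocks there, then hand the bytes to the worker.

        // Replacement for the udpserver branch of the selector loop:
        } else if (key.isReadable() && c == udpserver) {
            ByteBuffer buf = ByteBuffer.allocate(2048);
            SocketAddress client = udpserver.receive(buf);  // non-blocking, may be null
            if (client != null) {
                buf.flip();
                byte[] payload = new byte[buf.remaining()];
                buf.get(payload);
                // UDPWorker would run Utils.processCommand() on the payload
                // and reply with udpserver.send(replyBuffer, client).
                new UDPWorker(udpserver, client, payload).start();
            }
        }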


  • Lockless queue implementation ends up having a loop under stress

    - by Fozi
    I have a lockless queue, written in C in the form of a linked list, that contains requests posted from several threads and handled in a single thread. After a few hours of stress I end up with the last request's next pointer pointing to itself, which creates an endless loop and locks up the handling thread. The application runs (and fails) on both Linux and Windows. I'm debugging on Windows, where my COMPARE_EXCHANGE_PTR maps to InterlockedCompareExchangePointer.

    This is the code that pushes a request to the head of the list, and it is called from several threads:

        void push_request(struct request * volatile * root, struct request * request)
        {
            assert(request);

            do {
                request->next = *root;
            } while(COMPARE_EXCHANGE_PTR(root, request, request->next) != request->next);
        }

    This is the code that gets a request from the end of the list, and it is only called by the single thread that handles them:

        struct request * pop_request(struct request * volatile * root)
        {
            struct request * volatile * p;
            struct request * request;

            do {
                p = root;
                while(*p && (*p)->next)
                    p = &(*p)->next;   // <- loops here
                request = *p;
            } while(COMPARE_EXCHANGE_PTR(p, NULL, request) != request);

            assert(request->next == NULL);
            return request;
        }

    Note that I'm not using a tail pointer because I wanted to avoid the complication of having to deal with one in push_request. However, I suspect the problem might be in the way I find the end of the list. There are several places that push a request into the queue, but they all look generally like this:

        // device->requests is defined as: struct request * volatile requests;
        struct request * request = malloc(sizeof(struct request));
        if(request) {
            // fill out request fields
            push_request(&device->requests, request);
            sem_post(device->request_sem);
        }

    The code that handles the requests does more than this, but in essence it does the following in a loop:

        if(sem_wait_timeout(device->request_sem, timeout) == sem_success) {
            struct request * request = pop_request(&device->requests);
            // handle request
            free(request);
        }

    I also just added a function that checks the list for duplicates before and after each operation, but I'm afraid this check will change the timing so that I never hit the point where it fails. (I'm waiting for it to break as I write this.)

    When I break into the hanging program, the handler thread loops in pop_request at the marked position. I have a valid list of one or more requests, and the last one's next pointer points to itself. The request queues are usually short: I've never seen more than 10 entries, and only 1 and 3 the two times I could look at this failure in the debugger.

    I thought this through as much as I could, and I came to the conclusion that I should never be able to end up with a loop in my list unless I push the same request twice. I'm quite sure that never happens. I'm also fairly sure (although not completely) that it's not the ABA problem. I know I might pop more than one request at the same time, but I believe that is irrelevant here, and I've never seen it happen. (I'll fix this as well.) I thought long and hard about how my function could break, but I don't see a way to end up with a loop. So the question is: can someone see a way how this can break? Can someone prove that it cannot?

    Eventually I will solve this (maybe by using a tail pointer or some other solution -- locking would be a problem, because the threads that post must not be blocked, though I do have an RW lock at hand), but I would like to make sure that changing the list actually solves my problem, as opposed to merely making it less likely because of different timing.
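    Since the consumer is a single thread, one commonly used alternative shape -- a sketch only, with ATOMIC_EXCHANGE_PTR assumed to map to InterlockedExchangePointer on Windows or __atomic_exchange_n on GCC/Clang, in the spirit of the post's COMPARE_EXCHANGE_PTR macro -- is to detach the entire list in one atomic exchange and walk it privately, which removes the traversal-under-mutation entirely:

        struct request *pop_all(struct request * volatile *root)
        {
            /* Detach the whole chain in one shot; pushers continue
             * building a fresh list on the now-NULL root. */
            struct request *head = ATOMIC_EXCHANGE_PTR(root, NULL);
            struct request *fifo = NULL;

            while (head) {                /* reverse LIFO chain into FIFO order */
                struct request *next = head->next;
                head->next = fifo;
                fifo = head;
                head = next;
            }
            return fifo;                  /* handle and free privately, no CAS needed */
        }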


  • DNS no longer works after server reboot

    - by Burning the Codeigniter
    Strangely enough, when I reboot my Ubuntu 12.04 server, DNS no longer works, which makes the domain (and therefore my site) unreachable. Normally DNS should still work after a reboot, but this no longer happens. I use nginx to serve content, and nginx is already configured to work with my domains. What are the typical things I should check after a reboot, and how can I solve this? BIND, networking and resolvconf are already set to start when the server boots.

        ; <<>> DiG 9.8.1-P1 <<>> mysite.com
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    This is my zone file:

        $ttl 38400
        mysite.com. IN SOA ns1.mysite.com. webmaster.mysite.com. (
            1055026205 6H 1H 5D 20M )
        mysite.com.      IN A      xx.xx.xx.xx   # Server IP
        *.mysite.com.    IN A      xx.xx.xx.xx   # Server IP
        www.mysite.com.  IN CNAME  mysite.com.
        ns1.mysite.com.  IN A      xx.xx.xx.xx   # Server 2nd IP
        ns2.mysite.com.  IN A      xx.xx.xx.xx   # Server 3rd IP
        mysite.com.      IN NS     ns1.mysite.com.
        mysite.com.      IN NS     ns2.mysite.com.
        mail.mysite.com. IN MX 1   mysite.com.

    This is the contents of /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 85.17.150.123
        nameserver 85.17.96.69
        nameserver 62.212.64.122
        search localdomain

    Querying each of those resolvers directly gives the same REFUSED answer; here is the first (the outputs for 85.17.96.69 and 62.212.64.122 are identical apart from the query id and timing):

        ; <<>> DiG 9.7.3-P3 <<>> @85.17.150.123 mysite.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 24847
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;mysite.com.            IN      A

        ;; Query time: 2145 msec
        ;; SERVER: 85.17.150.123#53(85.17.150.123)
        ;; WHEN: Mon Nov 5 16:31:32 2012
        ;; MSG SIZE rcvd: 30

    With Google DNS (8.8.8.8):

        ; <<>> DiG 9.7.3-P3 <<>> @8.8.8.8 mysite.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 38498
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;mysite.com.            IN      A

        ;; Query time: 3982 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Mon Nov 5 16:37:27 2012
        ;; MSG SIZE rcvd: 30
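    Some first checks after a reboot on an Ubuntu 12.04 box running the bind9 package -- adjust names to the actual install; the zone-file path is an assumption:

        # Did named actually come up, and is it listening on port 53?
        service bind9 status
        netstat -ulnp | grep :53
        grep named /var/log/syslog | tail -n 50

        # Validate the config and the zone before suspecting the network:
        named-checkconf
        named-checkzone mysite.com /etc/bind/db.mysite.com

        # Make sure it is enabled at boot:
        update-rc.d bind9 enable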

    Read the article

  • No data when attempting to get JSONP data from cross domain PHP script

    - by Alex
I am trying to pull latitude and longitude values from another server on a different domain using a single id string. I am not very familiar with jQuery, but it seems to be the easiest way around the same-origin problem. I am unable to use iframes and I cannot install PHP on the server running this JavaScript, which is forcing my hand. My queries appear to go through properly, but I am not getting any results back. I was hoping someone here might have an idea that could help, since I probably wouldn't recognize even obvious errors here. My JavaScript function is:

    var surl = "http://...omitted.../pull.php";
    var idnum = 5a; // in practice this is defined above
    alert("BEFORE");
    $.ajax({
        url: surl,
        data: {id: idnum},
        dataType: "jsonp",
        jsonp: "callback",
        jsonp: "jsonpcallback",
        success: function (rdata) {
            alert(rdata.lat + ", " + rdata.lon);
        }
    });
    alert("BETWEEN");
    function jsonpcallback(rtndata) {
        alert("CALLED");
        alert(rtndata.lat + ", " + rtndata.lon);
    }
    alert("AFTER");

When my JavaScript runs, the BEFORE, BETWEEN and AFTER alerts are displayed. The CALLED and other jsonpcallback alerts are not shown. Is there another way to tell whether the jsonpcallback function has been called? Below is the PHP code running on the second server. I added the count table to my database just so I can tell when this script runs. Every time I call the JavaScript, count has had an extra row inserted and the id number is correct.

    <?php
    header("content-type: application/json");
    if (isset($_GET['id']) || isset($_POST['id'])) {
        $db_handle = mysql_connect($server, $username, $password);
        if (!$db_handle) {
            die('Could not connect: ' . mysql_error());
        }
        $db_found = mysql_select_db($database, $db_handle);
        if ($db_found) {
            if (isset($_POST['id'])) {
                $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_POST['id']));
            }
            if (isset($_GET['id'])) {
                $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_GET['id']));
            }
            $result = mysql_query($SQL, $db_handle);
            $db_field = mysql_fetch_assoc($result);
            $rtnjsonobj->lat = $db_field["lat"];
            $rtnjsonobj->lon = $db_field["lon"];
            if (isset($_POST['id'])) {
                echo $_POST['jsonpcallback'] . '(' . json_encode($rtnjsonobj) . ')';
            }
            if (isset($_GET['id'])) {
                echo $_GET['jsonpcallback'] . '(' . json_encode($rtnjsonobj) . ')';
            }
            $SQL = sprintf("INSERT INTO count (bullshit) VALUES ('%s')", $_GET['id']);
            $result = mysql_query($SQL, $db_handle);
            $db_field = mysql_fetch_assoc($result);
        }
        mysql_close($db_handle);
    } else {
        $rtnjsonobj->lat = 404;
        $rtnjsonobj->lon = 404;
        echo $_GET['jsonpcallback'] . '(' . json_encode($rtnjsonobj) . ')';
    }
    ?>

I am not entirely sure the JSONP returned by this PHP is correct. When I go directly to the PHP script without including any parameters, I do get the following:

    ({"lat":404,"lon":404})

The callback function name is not included, but that much can be expected when it isn't included in the original call. Does anyone have any idea what might be going wrong here? Thanks in advance!
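For comparison, a minimal corrected call (a sketch; the URL is a placeholder, and it assumes the PHP above keeps reading the callback name from the jsonpcallback parameter). Note that only one jsonp option can apply in the object literal, and a bare 5a is a JavaScript syntax error outside a string:

    var surl = "http://example.com/pull.php";   // placeholder URL
    var idnum = "5a";                           // quoted; in practice defined elsewhere
    $.ajax({
        url: surl,
        data: { id: idnum },
        dataType: "jsonp",
        jsonp: "jsonpcallback",   // name of the GET parameter carrying the callback
        success: function (rdata) {
            // jQuery generates the callback name, parses the response, and lands
            // here; no hand-written global jsonpcallback function is needed.
            alert(rdata.lat + ", " + rdata.lon);
        }
    });

With dataType "jsonp", jQuery invents the callback name itself, so a hand-defined jsonpcallback function is never invoked; the success handler is the place to read the returned data.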

    Read the article

  • MVC 2 Ajax.BeginForm passes returned Html + Json to JavaScript function

    - by Joe
Hi, I have a small partial Create Person form in a page above a table of results. I want to be able to post the form to the server, which is no problem with Ajax.BeginForm:

    <% using (Ajax.BeginForm("Create", new AjaxOptions { OnComplete = "ProcessResponse" })) { %>
    <fieldset>
        <legend>Fields</legend>
        <div class="editor-label">
            <%= Html.LabelFor(model => model.FirstName) %>
        </div>
        <div class="editor-field">
            <%= Html.TextBoxFor(model => model.FirstName) %>
            <%= Html.ValidationMessageFor(model => model.FirstName) %>
        </div>
        <div class="editor-label">
            <%= Html.LabelFor(model => model.LastName) %>
        </div>
        <div class="editor-field">
            <%= Html.TextBoxFor(model => model.LastName) %>
            <%= Html.ValidationMessageFor(model => model.LastName) %>
        </div>
        <p>
            <input type="submit" />
        </p>
    </fieldset>
    <% } %>

Then in my controller I want to post back a partial, which is just a table row, if the create succeeds, and append it to the table, which is easy with jQuery:

    $('#personTable tr:last').after(data);

However, if server-side validation fails, I want to send back my partial Create Person form with the validation errors and replace the existing form. I have tried returning a Json array.

Controller:

    return Json(new { Success = true, Html = this.RenderViewToString("PersonSubform", person) });

Javascript:

    var json_data = response.get_response().get_object();

with a pass/fail flag and the partial rendered to a string using the solution from the Stack Overflow question "RenderPartialToString", but that doesn't render the MVC validation controls when the form fails.

So, is there any way I can hand my JavaScript the out-of-the-box PartialView("PersonForm") as it's returned from my Ajax form? Can I pass some additional info as a Json array so I can tell whether it passed or failed, and maybe add a message?

UPDATE: I can now pass the HTML of a PartialView to my JavaScript, but I need to pass some additional data pairs like ServerValidation: true/false and ActionMessage: "you have just created a Person Bill". Ideally I would pass a Json array rather than hidden fields in my partial.

    function ProcessResponse(response) {
        var html = response.get_data();
        $("#campaignSubform").html(html);
    }

Many thanks in advance
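One common shape for this (a sketch; RenderViewToString is the string-rendering helper referenced above, not a built-in MVC 2 API, and _repository is a hypothetical persistence object) is to always return a JSON envelope with a success flag, the rendered HTML, and a message, then branch in ProcessResponse:

    [HttpPost]
    public ActionResult Create(Person person)
    {
        if (!ModelState.IsValid)
        {
            // Re-render the form partial so validation messages are baked in
            return Json(new {
                ServerValidation = false,
                Html = this.RenderViewToString("PersonForm", person),
                ActionMessage = "Please correct the errors below"
            });
        }

        _repository.Save(person); // hypothetical persistence call
        return Json(new {
            ServerValidation = true,
            Html = this.RenderViewToString("PersonSubform", person),
            ActionMessage = "You have just created a Person " + person.FirstName
        });
    }

The client callback then reads the object, appends the row when ServerValidation is true, and otherwise swaps the form HTML back in, which keeps the hidden-field workaround out of the partials.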

    Read the article

  • OpenVPN IPv6 over IPv4 tunnel

    - by user66779
Today I installed OpenVPN 2.3rc2 on both my Windows 7 client machine and my CentOS 6 server. This new version of OpenVPN provides full compatibility with IPv6.

The problem: I am currently able to connect to the server (through the IPv4 tunnel) and ping the IPv6 address assigned to my client, and I can also ping the tun0 interface on the server. However, I cannot browse to any IPv6 websites. My VPS provider has given me this: 2607:f840:0044:0022:0000:0000:0000:0000/64 is routed to this server (2607:f840:0:3f:0:0:0:eda).

This is ifconfig after setup, with OpenVPN running:

    eth0    Link encap:Ethernet HWaddr 00:16:3E:12:77:54
            inet addr:208.111.39.160 Bcast:208.111.39.255 Mask:255.255.255.0
            inet6 addr: 2607:f740:0:3f::eda/64 Scope:Global
            inet6 addr: fe80::216:3eff:fe12:7754/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:2317253 errors:0 dropped:7263 overruns:0 frame:0
            TX packets:1977414 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:1696120096 (1.5 GiB) TX bytes:1735352992 (1.6 GiB)
            Interrupt:29

    lo      Link encap:Local Loopback
            inet addr:127.0.0.1 Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING MTU:16436 Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

    tun0    Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
            inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255
            inet6 addr: 2607:f740:44:22::1/64 Scope:Global
            UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
            RX packets:739567 errors:0 dropped:0 overruns:0 frame:0
            TX packets:1218240 errors:0 dropped:1542 overruns:0 carrier:0
            collisions:0 txqueuelen:100
            RX bytes:46512557 (44.3 MiB) TX bytes:1559930874 (1.4 GiB)

So OpenVPN successfully creates a tun0 interface and assigns clients IPv6 addresses from 2607:f840:44:22::/64. The first client to connect gets 2607:f840:44:22::1000, the second 2607:f840:44:22::1001, and so on, plus one each time. After connecting as the first client, I can ping 2607:f740:44:22::1 and 2607:f740:44:22::1000 from my Windows client machine. However, I have no access to IPv6 websites. I believe the problem is that the tun0 IPv6 addresses are not being forwarded to the eth0 interface.

This is the firewall running on the server:

    #!/bin/sh
    #
    # iptables configuration script
    #
    # Flush all current rules from iptables
    #
    iptables -F
    iptables -t nat -F
    #
    # Allow SSH connections on tcp port 22
    #
    iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
    iptables -A OUTPUT -o eth0 -p tcp --sport 22 -j ACCEPT
    #
    # Set access for localhost
    #
    iptables -A INPUT -i lo -j ACCEPT
    #
    # Accept connections on 1195 for vpn access from client
    #
    iptables -A INPUT -i eth0 -p udp --dport 1195 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -o eth0 -p udp --sport 1195 -m state --state ESTABLISHED -j ACCEPT
    #
    # Apply forwarding for OpenVPN Tunneling
    #
    iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
    iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 209.111.39.160
    iptables -A FORWARD -j REJECT
    #
    # Enable forwarding
    #
    echo 1 > /proc/sys/net/ipv4/ip_forward
    #
    # Set default policies for INPUT, FORWARD and OUTPUT chains
    #
    iptables -P INPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables -P OUTPUT ACCEPT
    #
    # IPv6
    #
    IP6TABLES=/sbin/ip6tables
    $IP6TABLES -F INPUT
    $IP6TABLES -F FORWARD
    $IP6TABLES -F OUTPUT
    echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding
    echo -n "1" >/proc/sys/net/ipv6/conf/all/proxy_ndp
    echo -n "0" >/proc/sys/net/ipv6/conf/all/autoconf
    echo -n "0" >/proc/sys/net/ipv6/conf/all/accept_ra
    $IP6TABLES -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    $IP6TABLES -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
    $IP6TABLES -A INPUT -i eth0 -p icmpv6 -j ACCEPT
    $IP6TABLES -P INPUT ACCEPT
    $IP6TABLES -P FORWARD ACCEPT
    $IP6TABLES -P OUTPUT ACCEPT

Server.conf:

    server-ipv6 2607:f840:44:22::/64
    server 10.8.0.0 255.255.255.0
    port 1195
    proto udp
    dev tun
    ca ca.crt
    cert server.crt
    key server.key
    dh dh2048.pem
    ifconfig-pool-persist ipp.txt
    push "redirect-gateway def1 bypass-dhcp"
    push "dhcp-option DNS 208.67.222.222"
    push "dhcp-option DNS 208.67.220.220"
    keepalive 10 60
    tls-auth ta.key 0
    cipher AES-256-CBC
    comp-lzo
    user nobody
    group nobody
    persist-key
    persist-tun
    status openvpn-status.log
    log-append openvpn.log
    verb 5

Client.conf:

    client
    dev tun
    nobind
    keepalive 10 60
    hand-window 15
    remote 209.111.39.160 1195 udp
    persist-key
    persist-tun
    ca ca.crt
    key client1.key
    cert client1.crt
    remote-cert-tls server
    tls-auth ta.key 1
    comp-lzo
    verb 3
    cipher AES-256-CBC

I'm not sure where I am going wrong; it could be the firewall, or something missing from server.conf or client.conf. This version of OpenVPN was only released yesterday, and there's little information online about how to set up an IPv6-over-IPv4 VPN tunnel. I've read the parts of the manual for this new version that pertain to IPv6, and they provide very little too. Thanks for any help.
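Given that the provider says the /64 is routed to the server's address, forwarding plus the proxy_ndp flag set above may still be missing per-address proxy entries if the prefix is in fact delivered on-link. A hedged diagnostic sequence (addresses taken from the config above):

    # On the server: answer neighbor solicitations for a client address on eth0
    # (only needed if the /64 is on-link rather than truly routed)
    ip -6 neigh add proxy 2607:f840:44:22::1000 dev eth0
    # Double-check IPv6 forwarding really is on
    sysctl net.ipv6.conf.all.forwarding
    # From the server, confirm outbound IPv6 works at all
    ping6 -c 3 ipv6.google.com
    # From the Windows client, find where packets stop
    tracert -6 ipv6.google.com

tracert -6 is the Windows counterpart of traceroute6; if the trace dies at the server's tun0 address, the return path (the provider's routing of the /64) is the usual suspect rather than the OpenVPN config itself.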

    Read the article

  • When saving a model with has_one or has_many associations, which side of the association is saved first?

    - by SeeBees
I have three simplified models:

    class Team < ActiveRecord::Base
      has_many :players
      has_one :coach
    end

    class Player < ActiveRecord::Base
      belongs_to :team
      validates_presence_of :team_id
    end

    class Coach < ActiveRecord::Base
      belongs_to :team
      validates_presence_of :team_id
    end

I use the following code to test these models:

    team = Team.new
    team.coach = Coach.new
    team.save!

team.save! returns true. But in another test:

    team = Team.new
    team.players << Player.new
    team.save!

team.save! raises the following error:

    ActiveRecord::RecordInvalid: Validation failed: Players is invalid
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/validations.rb:1090:in `save_without_dirty!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/dirty.rb:87:in `save_without_transactions!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:182:in `transaction'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
    from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
    from (irb):14

I figured out that when team.save! is called, it first calls player.save!. The player needs to validate the presence of the id of the associated team, but at the time player.save! is called, the team hasn't been saved yet, so team_id doesn't yet exist for the player. This fails the player's validation, hence the error. On the other hand, the team is saved before coach.save!; otherwise the first example would raise the same error as the second. So I've concluded that when a has_many bs, a.save! saves the bs prior to a, and when a has_one b, a.save! saves a prior to b. If I am right, why is this the case? It doesn't seem logical to me. Why do has_one and has_many associations save in different orders? Any ideas? And is there any way I can change the order, say, to have the same saving order for both has_one and has_many? Thanks.
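A workaround that usually resolves this in Rails 2.3 (a sketch; it validates the in-memory association instead of the raw foreign key, so a Player built on an unsaved Team passes validation and the id is filled in when the Team saves):

    class Player < ActiveRecord::Base
      belongs_to :team
      # Validate the association object, not team_id: the object exists
      # before either record has been written to the database.
      validates_presence_of :team
    end

    class Coach < ActiveRecord::Base
      belongs_to :team
      validates_presence_of :team
    end

The saving order itself is fixed inside ActiveRecord's autosave machinery, so changing what is validated is generally easier than changing when each side is saved.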

    Read the article

  • Little Regular Expression (against HTML) help

    - by Marcos Placona
Hi, I have the following HTML:

    <p>Some text <a title="link" href="http://link.com/" target="_blank">my link</a> more text <a title="link" href="http://link.com/" target="_blank">more link</a>.</p>
    <p>Another paragraph.</p>
    <p>[code:cf]</p>
    <p>&lt;cfset ArrFruits = ["Orange", "Apple", "Peach", "Blueberry", </p>
    <p>"Blackberry", "Strawberry", "Grape", "Mango", </p>
    <p>"Clementine", "Cherry", "Plum", "Guava", </p>
    <p>"Cranberry"]&gt;</p>
    <p>[/code]</p>
    <p>Another line</p>
    <p><img src="http://image.jpg" alt="Array" /> </p>
    <p>More text</p>
    <p>[code:cf]</p>
    <p>&lt;table border="1"&gt;</p>
    <p> &lt;cfoutput&gt;</p>
    <p> &lt;cfloop array="#GroupsOf(ArrFruits, 5)#" index="arrFruitsIX"&gt;</p>
    <p>  &lt;tr&gt;</p>
    <p> &lt;cfloop array="#arrFruitsIX#" index="arrFruit"&gt;</p>
    <p>     &lt;td&gt;#arrFruit#&lt;/td&gt;</p>
    <p> &lt;/cfloop&gt;</p>
    <p>  &lt;/tr&gt;</p>
    <p> &lt;/cfloop&gt;</p>
    <p> &lt;/cfoutput&gt;</p>
    <p>&lt;/table&gt;</p>
    <p>[/code]</p>
    <p>With an output that looks like:</p>
    <p><img src="another_image.jpg" alt="" width="342" height="85" /></p>

What I'm trying to do is write a regular expression that will remove all the <p> and </p> tags, and whenever it finds a </p>, replace it with a line-break. So far, my pattern looks like this:

    /\<p\>(.*?)(<\/p>)/g

And I'm replacing the matches with:

    $1\n

It all looks good, but it's also replacing the content inside the [code][/code] tags, which in this case should not be touched at all. As a result, I would like to get rid of the <p> tags only when the content isn't inside the [code] tags. I can never get negation right; I know it will be something along the lines of:

    \<p\>^\[code*\](.*?)(<\/p>)

But obviously this doesn't work :-) Could anyone please lend me a hand with this regex? BTW, I know I shouldn't be using regular expressions to parse HTML at all. I'm fully aware of that, but still, for this specific case, I'd like to use regex. Thanks in advance
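A hedged way around the negation (a JavaScript sketch; instead of negating [code] inside one pattern, split the captured [code]...[/code] blocks out first and strip the <p> tags only in the segments between them):

    var parts = html.split(/(\[code:?\w*\][\s\S]*?\[\/code\])/);
    var out = parts.map(function (part, i) {
        // Odd indexes hold the captured [code]...[/code] blocks: leave them alone
        if (i % 2 === 1) return part;
        // Everywhere else, drop <p> and turn </p> into a line-break
        return part.replace(/<p>/g, "").replace(/<\/p>/g, "\n");
    }).join("");

split with a capturing group keeps the delimiters in the result array, which is what makes the odd/even trick work; the same two-pass idea ports directly to ColdFusion or any other regex engine.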

    Read the article

  • LXC Container Networking

    - by digitaladdictions
I just started to experiment with LXC containers. I was able to create a container and start it up, but I cannot get DHCP to assign the container an IP address. If I assign a static address, the container can ping the host IP but nothing beyond it. The host is CentOS 6.5 and the guest is Ubuntu 14.04 LTS. I used the template downloaded by the lxc-create -t download -n cn-01 command. Since I am trying to get an IP address on the same subnet as the host, I don't believe I should need the iptables masquerading rule, but I added it anyway. The same goes for IP forwarding. I compiled LXC by hand from the following source: https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz

Host operating system version:

    #> cat /etc/redhat-release
    CentOS release 6.5 (Final)
    #> uname -a
    Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Container config:

    #> cat /usr/local/var/lib/lxc/cn-01/config
    # Template used to create this container: /usr/local/share/lxc/templates/lxc-download
    # Parameters passed to the template:
    # For additional config options, please look at lxc.container.conf(5)
    # Distribution configuration
    lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf
    lxc.arch = x86_64
    # Container specific configuration
    lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs
    lxc.utsname = cn-01
    # Network configuration
    lxc.network.type = veth
    lxc.network.flags = up
    lxc.network.link = br0

LXC default.conf:

    #> cat /usr/local/etc/lxc/default.conf
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up

    #> lxc-checkconfig
    Kernel configuration not found at /proc/config.gz; searching...
    Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64
    --- Namespaces ---
    Namespaces: enabled
    Utsname namespace: enabled
    Ipc namespace: enabled
    Pid namespace: enabled
    User namespace: enabled
    Network namespace: enabled
    Multiple /dev/pts instances: enabled
    --- Control groups ---
    Cgroup: enabled
    Cgroup namespace: enabled
    Cgroup device: enabled
    Cgroup sched: enabled
    Cgroup cpu account: enabled
    Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments
    enabled
    Cgroup cpuset: enabled
    --- Misc ---
    Veth pair device: enabled
    Macvlan: enabled
    Vlan: enabled
    File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected
    Note : Before booting a new kernel, you can check its configuration
    usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig

Network config (host):

    #> cat /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    BOOTPROTO=dhcp
    ONBOOT=yes
    #> cat /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    TYPE=Ethernet
    IPV6INIT=no
    USERCTL=no
    BRIDGE=br0
    #> cat /etc/networks
    default 0.0.0.0
    loopback 127.0.0.0
    link-local 169.254.0.0
    #> ip a s
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::20c:29ff:fe12:30f2/64 scope link
           valid_lft forever preferred_lft forever
    3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
        link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff
    4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff
        inet 10.60.70.121/24 brd 10.60.70.255 scope global br0
        inet6 fe80::20c:29ff:fe12:30f2/64 scope link
           valid_lft forever preferred_lft forever
    12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fca1:69ff:feaf:5017/64 scope link
           valid_lft forever preferred_lft forever
    #> brctl show
    bridge name  bridge id          STP enabled  interfaces
    br0          8000.000c291230f2  no           eth0
                                                 vethT6BGL2
    pan0         8000.000000000000  no
    #> cat /proc/sys/net/ipv4/ip_forward
    1
    # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014
    *nat
    :PREROUTING ACCEPT [34:6287]
    :POSTROUTING ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A POSTROUTING -o eth0 -j MASQUERADE
    COMMIT
    # Completed on Fri Jul 11 15:11:36 2014

Network config (container):

    #> cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    # The loopback network interface
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet dhcp
    #> ip a s
    11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::69:fbff:fe42:eed7/64 scope link
           valid_lft forever preferred_lft forever
    13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
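A quick way to see whether the container's DHCP broadcasts ever reach the wire (a sketch; interface names taken from the output above):

    # On the host: watch DHCP traffic on the bridge and on the veth end
    tcpdump -ni br0 port 67 or port 68
    tcpdump -ni vethT6BGL2 port 67 or port 68

    # Inside the container: retry DHCP in the foreground with logging
    dhclient -v eth0

If the DISCOVER packets show up on vethT6BGL2 but not on br0, the bridge is the problem; if they appear on br0 but no OFFER ever comes back, the external DHCP server (or an upstream filter on the 10.60.70.0/24 segment) is the next suspect, and the MASQUERADE rule is irrelevant for same-subnet addressing either way.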

    Read the article

  • jQuery - animating 'left' position of absolutely positioned div when sliding panel is revealed

    - by trickymatt
Hello, I have a hidden panel off the left side of the screen which slides into view on the click of a 'tab' positioned on the left side of the screen. I need the panel to slide over the top of the existing page content, and I need the tab to move with it, so both are absolutely positioned in CSS. Everything works fine, except that I need the tab (and thus the tab container) to move with the panel when it is revealed, so that it appears to be stuck to the right-hand side of the panel. It is relatively simple when using floats, but of course floats affect the layout of the existing content, hence the absolute positioning. I have tried animating the left position of the tab container (see the flagged function in the jQuery below), but I can't get it to work.

My HTML:

    <div><!-- sample page content -->
        <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et</p>
    </div>
    <div id="panel" class="height"><!-- the hidden panel -->
        <div class="content">
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore</p>
        </div>
    </div>
    <!-- if javascript is disabled use this link -->
    <div id="tab-container" class="height">
        <a href="#" onclick="return()">
            <div id="tab"><!-- this will activate the panel. --></div>
        </a>
    </div>

My jQuery:

    $(document).ready(function(){
        $("#panel, .content").hide(); // hides the panel and content from the user
        $('#tab').toggle(function(){ // adding a toggle function to the #tab
            $('#panel').stop().animate({width:"400px", opacity:0.8}, 100, // sliding the #panel to 400px
            // THIS NEXT FUNCTION DOES NOT WORK -->
            function() {
                $('#tab-container').animate({left:"400px"} // 400px to match the panel width
            });
            function() {
                $('.content').fadeIn('slow'); // slides the content into view.
            });
        }, function(){ // when the #tab is next clicked
            $('.content').fadeOut('slow', function() { // fade out the content
                $('#panel').stop().animate({width:"0", opacity:0.1}, 500); // slide the #panel back to a width of 0
            });
        });
    });

and this is the CSS:

    #panel {
        position:absolute;
        left:0px;
        top:50px;
        background-color:#999999;
        height:500px;
        display:none; /* hide the panel if Javascript is not running */
    }
    #panel .content {
        width:290px;
        margin-left:30px;
    }
    #tab-container {
        position:absolute;
        top:20px;
        width:50px;
        height:620px;
        background:#161616;
    }
    #tab {
        width:50px;
        height:150px;
        margin-top:100px;
        display:block;
        cursor:pointer;
        background:#DDD;
    }

Many thanks
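A hedged rework of the toggle handler (a sketch; it runs the tab-container animation in parallel with the panel instead of nesting it in a malformed callback, and it assumes #tab-container gains left:0 in the CSS so that left can be animated):

    $(document).ready(function () {
        $("#panel, .content").hide();

        $('#tab').toggle(function () {
            // Open: widen the panel, then fade the content in
            $('#panel').stop().animate({ width: "400px", opacity: 0.8 }, 100, function () {
                $('.content').fadeIn('slow');
            });
            // Move the tab alongside the panel: same duration, issued in parallel
            $('#tab-container').stop().animate({ left: "400px" }, 100);
        }, function () {
            // Close: fade the content out, then collapse panel and tab together
            $('.content').fadeOut('slow', function () {
                $('#panel').stop().animate({ width: "0", opacity: 0.1 }, 500);
                $('#tab-container').stop().animate({ left: "0" }, 500);
            });
        });
    });

The key change is that the panel's completion callback only handles the fade, while the tab animation is its own parallel call rather than a second function argument inside animate.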

    Read the article

< Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >