Search Results

Search found 52963 results on 2119 pages for 'web interface'.


  • Implementation/interface inheritance design question.

    - by Neil G
    I would like to get the Stack Overflow community's opinion on the following three design patterns. The first is implementation inheritance; the second is interface inheritance; the third is a middle ground. My specific question is: which is best?

    Implementation inheritance:

        class Base {
            virtual X x() const = 0;
            void UpdateX(A a) { y_ = g(a); }
            Y y_;
        };

        class Derived : Base {
            X x() const { return f(y_); }
        };

    Interface inheritance:

        class Base {
            virtual X x() const = 0;
            virtual void UpdateX(A a) = 0;
        };

        class Derived : Base {
            X x() const { return x_; }
            void UpdateX(A a) { x_ = f(g(a)); }
            X x_;
        };

    Middle ground:

        class Base {
            X x() const { return x_; }
            virtual void UpdateX(A a) = 0;
            X x_;
        };

        class Derived : Base {
            void UpdateX(A a) { x_ = f(g(a)); }
        };

    I know that many people prefer interface inheritance to implementation inheritance. However, the advantage of the latter is that with a pointer to Base, x() can be inlined and the address of x_ can be statically calculated.

    Read the article

  • Derived interface from generic method

    - by Sunit
    I'm trying to do this:

        public interface IVirtualInterface { }

        public interface IFabricationInfo : IVirtualInterface
        {
            int Type { get; set; }
            int Requirement { get; set; }
        }

        public interface ICoatingInfo : IVirtualInterface
        {
            int Type { get; set; }
            int Requirement { get; set; }
        }

        public class FabInfo : IFabricationInfo
        {
            public int Requirement { get { return 1; } set { } }
            public int Type { get { return 1; } set { } }
        }

        public class CoatInfo : ICoatingInfo
        {
            public int Type { get { return 1; } set { } }
            public int Requirement { get { return 1; } set { } }
        }

        public class BusinessObj
        {
            public T VirtualInterface<T>() where T : IVirtualInterface
            {
                Type targetInterface = typeof(T);
                if (targetInterface.IsAssignableFrom(typeof(IFabricationInfo)))
                {
                    var oFI = new FabInfo();
                    return (T)oFI;
                }
                if (targetInterface.IsAssignableFrom(typeof(ICoatingInfo)))
                {
                    var oCI = new CoatInfo();
                    return (T)oCI;
                }
                return default(T);
            }
        }

    But I am getting a compiler error: "Cannot convert type 'GenericIntf.FabInfo' to T". How do I fix this? Thanks, Sunit
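    A common way around this particular compiler error is to cast through object, which defers the conversion check to run time. The sketch below keeps the method body from the question as-is (including its IsAssignableFrom checks) and only changes the casts; it is one possible fix, not necessarily the one the asker adopted, and it assumes the interfaces and classes posted above.

        using System;

        public class BusinessObj
        {
            public T VirtualInterface<T>() where T : IVirtualInterface
            {
                Type targetInterface = typeof(T);
                if (targetInterface.IsAssignableFrom(typeof(IFabricationInfo)))
                {
                    // Casting up to object and back down to T compiles, because the
                    // compiler allows an explicit conversion from object to any type;
                    // an incompatible T still fails at run time with InvalidCastException.
                    return (T)(object)new FabInfo();
                }
                if (targetInterface.IsAssignableFrom(typeof(ICoatingInfo)))
                {
                    return (T)(object)new CoatInfo();
                }
                return default(T);
            }
        }

    Callers would use it as before, for example: var fab = new BusinessObj().VirtualInterface<IFabricationInfo>();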

    Read the article

  • IntelliJ Doesn't Notice Changes in Interface

    - by yar
    [I've decided to give IntelliJ another go (to replace Eclipse), since its Groovy support is supposed to be the best. But back to Java...] I have an Interface that defines a constant public static final int CHANNEL_IN = 1; and about 20 classes in my Module that implement that interface. I've decided that this constant was a bad idea so I did what I do in Eclipse: I deleted the entire line. This should cause the Project tree to light up like a Christmas tree and all classes that implement that interface and use that constant to break. Instead, this is not happening. If I don't actually double-click on the relevant classes -- which I find using grep -- the module even builds correctly (using Build - Make Module). If I double-click on a relevant class, the error is shown both in the Project Tree and in the Editor. I am not able to replicate this behavior in small tests, but in large modules it works (incorrectly) this way. Is there some relevant setting in IntelliJ for this?

    Read the article

  • C# -Fluent interface implementation Help

    - by nettguy
    I am implementing the following piece of code using the fluent interface design in C# 3.0. The code is working fine.

        public interface ITrainable
        {
            ITrainable AddSkill(string _skill);
        }

        public interface ISearchSkill
        {
            ISearchSkill SearchSkill(SoftwareEngineer emp, string[] _skills);
        }

        public abstract class Person
        {
            public Person() { }
            protected string Name { get; set; }
        }

        public class SoftwareEngineer : Person, ITrainable
        {
            protected internal List<string> skillSet { get; set; }

            public SoftwareEngineer() { }

            public SoftwareEngineer(string name)
            {
                Name = name;
                skillSet = new List<string>();
            }

            public ITrainable AddSkill(string _skill)
            {
                skillSet.Add(_skill);
                return this;
            }
        }

        public class HRExecutive : Person, ISearchSkill
        {
            SoftwareEngineer _employee;

            public HRExecutive()
            {
                _employee = new SoftwareEngineer();
            }

            public ISearchSkill SearchSkill(SoftwareEngineer _employee, string[] skills)
            {
                this._employee = _employee;
                foreach (string _skill in skills)
                {
                    if (_employee.skillSet.Contains(_skill))
                    {
                        Console.WriteLine(Name + " is trained on " + _skill);
                    }
                    else
                    {
                        Console.WriteLine(Name + " is not trained on " + _skill);
                    }
                }
                return this;
            }
        }

    Execution:

        SoftwareEngineer emp1 = new SoftwareEngineer("JonSkeet");
        emp1.AddSkill("java").AddSkill("C#").AddSkill("F#");

        HRExecutive hr = new HRExecutive();
        hr.SearchSkill(emp1, new string[] { "java", "C#" }).
           SearchSkill(emp1, new string[] { "Oracle", "F#" });

    Question: I don't want the skillSet of SoftwareEngineer to be accessible from just any class; it should be accessible only to a limited set of classes. But protected internal List<string> skillSet { get; set; } is the only option (I think) I can declare in order to access the skillSet from HRExecutive. If I do that, any other class can still access it. How can I rewrite the code to prevent this?
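    One possible direction, sketched here with a hypothetical HasSkill member rather than anything taken from the original post: keep the list private and expose only the query that HRExecutive actually needs, so no class can reach the underlying collection. The sketch reuses the Person and ITrainable types from the question above.

        using System.Collections.Generic;

        public class SoftwareEngineer : Person, ITrainable
        {
            // The backing list is now private; no outside class can read or modify it.
            private readonly List<string> skillSet = new List<string>();

            public SoftwareEngineer(string name)
            {
                Name = name;
            }

            public ITrainable AddSkill(string skill)
            {
                skillSet.Add(skill);
                return this;
            }

            // Hypothetical query method: exposes a yes/no answer, not the collection itself.
            public bool HasSkill(string skill)
            {
                return skillSet.Contains(skill);
            }
        }

    HRExecutive.SearchSkill would then call _employee.HasSkill(_skill) instead of touching _employee.skillSet, so any class can ask whether a skill exists but none can alter or enumerate the list.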

    Read the article

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer. As a unit test, what value am I gaining by testing the ICustomerRepository in this manner? Under what conditions would the test below fail? For tests of this nature, is it advisable to write tests that I know should fail, i.e. look for id 4 when I know I've only placed id 5 in the repository? I am probably missing something obvious, but it seems the integration tests of the class that implements ICustomerRepository will be of more value.

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            private Customer SetUpCustomerForRepository()
            {
                return new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[]
                    {
                        new Login { LoginCustId = 5, LoginName = "tdude" },
                        new Login { LoginCustId = 5, LoginName = "tdude2" }
                    }
                };
            }

            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = SetUpCustomerForRepository();
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }
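    The underlying point of the question is that stubbing ICustomerRepository and then asserting against the stub mostly verifies the mocking framework itself. As a contrast, here is a minimal sketch of a test that exercises real lookup behavior; the InMemoryCustomerRepository is hypothetical (not part of the post) and the sketch assumes the Customer type from the post and the MSTest attributes already used above.

        using System.Collections.Generic;

        // Hypothetical in-memory implementation, shown only to illustrate the point;
        // it deliberately covers just Save and GetById.
        public class InMemoryCustomerRepository
        {
            private readonly Dictionary<int, Customer> store = new Dictionary<int, Customer>();

            public void Save(Customer entity)
            {
                store[entity.CustId] = entity;
            }

            public Customer GetById(int id)
            {
                Customer found;
                return store.TryGetValue(id, out found) ? found : null;
            }
        }

        [TestClass]
        public class InMemoryCustomerRepositoryTests
        {
            [TestMethod]
            public void GetById_ReturnsNull_ForUnknownId()
            {
                var repository = new InMemoryCustomerRepository();
                repository.Save(new Customer { CustId = 5 });

                // Exercises real lookup logic instead of echoing a stubbed return value.
                Assert.IsNull(repository.GetById(4));
            }
        }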

    Read the article

  • Best way to organize a Go interface

    - by Metropolis
    Hey everyone, it's been a long time since I programmed in C++, and if I remember correctly the best way to organize classes was to declare your class in the .h file and put your implementation in your .cpp file. Well, I am trying to learn Go now, and I was reading over the Go for C++ Programmers article when I came upon interfaces. The article explains that interfaces in Go essentially take the place of classes, and shows how to set them up pretty well. What I am trying to figure out, though, is how I should organize an interface into files. For instance, should the interface be in one file while the implementation is in another?

    myInterface.go:

        type myInterface interface {
            get() int
            set(i int)
        }

    myImplementation.go:

        type myType struct {
            i int
        }

        func (p *myType) set(i int) {
            p.i = i
        }

        func (p *myType) get() int {
            return p.i
        }

    My code here may be wrong since I do not completely know what I am doing yet (and if I am wrong please correct me), but would this be the best way to set this up? I'm having a very hard time trying to wrap my head around how to organize code in Go, so any help is appreciated! Metropolis

    Read the article

  • Remote interface lookup-problem in Glassfish3

    - by andersmo
    I have deployed a war file, with action classes and a facade, and a jar file with EJB components (a stateless bean, a couple of entities and a persistence.xml) on GlassFish 3. My problem is that I can't find the remote interface to the stateless bean from my facade. My bean and interface look like:

        @Remote
        public interface RecordService {...

        @Stateless(name="RecordServiceBean", mappedName="ejb/RecordServiceJNDI")
        public class RecordServiceImpl implements RecordService {
            @PersistenceContext(unitName="record_persistence_ctx")
            private EntityManager em;...

    and if I look in the server.log the portable JNDI names look like:

        Portable JNDI names for EJB RecordServiceBean : [java:global/recordEjb/RecordServiceBean, java:global/recordEjb/RecordServiceBean!domain.service.RecordService]|#]

    and my facade:

        ...InitialContext ctx = new InitialContext();
        try {
            recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceBean!domain.service.RecordService");
        } catch (Throwable t) {
            System.out.println("ooops");
            try {
                recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceImpl");
            } catch (Throwable t2) {
                System.out.println("noooo!");
            }...
        }

    and when the facade makes the first call this exception occurs:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean!domain.service.RecordService' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    and the second call:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    I have also tested injecting the bean with the @EJB annotation:

        @EJB(name="RecordServiceBean")
        private RecordService recordService;

    But that doesn't work either. What have I missed? I tried with an ejb-jar.xml but that shouldn't be necessary. Is there anyone who can tell me how to fix this problem?

    Read the article

  • interface variables are final and static by default and methods are public and abstract

    - by sap
    The question is why it was decided that interface variables should be final and static and methods public and abstract by default. Is there any particular reason for making these modifiers implicit (variables final and static, methods public and abstract)? Why are static methods not allowed while static variables are? We have interfaces to get the effect of multiple inheritance in Java and to avoid the diamond problem. But how does this solve the diamond problem, given that static methods are not allowed? In the following program, both interfaces declare a method with the same name, but the implementing class provides only one implementation. Is this how the diamond problem is solved?

        interface testInt {
            int m = 0;
            void testMethod();
        }

        interface testInt1 {
            int m = 10;
            void testMethod();
        }

        public class interfaceCheck implements testInt, testInt1 {
            public void testMethod() {
                System.out.println("m is " + testInt.m);
                System.out.println("Hi World!");
            }
        }

    Read the article

  • Fortran pointer as an argument to interface procedure

    - by icarusthecow
    I'm trying to use interfaces to call different subroutines with different types; however, it doesn't seem to work when I use the pointer attribute. For example, take this sample code:

        MODULE ptr_types
            TYPE, abstract :: parent
                INTEGER :: q
            END TYPE

            TYPE, extends(parent) :: child
                INTEGER :: m
            END TYPE

            INTERFACE ptr_interface
                MODULE PROCEDURE do_something
            END INTERFACE

        CONTAINS

            SUBROUTINE do_something(atype)
                CLASS(parent), POINTER :: atype
                ! code determines that this allocation is correct from input
                ALLOCATE(child::atype)
                WRITE (*,*) atype%q
            END SUBROUTINE
        END MODULE

        PROGRAM testpass
            USE ptr_types
            CLASS(child), POINTER :: ctype
            CALL ptr_interface(ctype)
        END PROGRAM

    This gives the error:

        Error: There is no specific subroutine for the generic 'ptr_interface' at (1)

    However, if I remove the pointer attribute in the subroutine it compiles fine. Now, normally this wouldn't be a problem, but for my use case I need to be able to treat that argument as a pointer, mainly so I can allocate it if necessary. Any suggestions? Mind you, I'm new to Fortran so I may have missed something.

    Edit: I forgot to put the allocation in the parent's subroutine; the initial input is unallocated.

    Edit 2: This is my second attempt, with caller-side casting:

        MODULE ptr_types
            TYPE, abstract :: parent
                INTEGER :: q
            END TYPE

            TYPE, extends(parent) :: child
                INTEGER :: m
            END TYPE

            TYPE, extends(parent) :: second
                INTEGER :: meow
            END TYPE

        CONTAINS

            SUBROUTINE do_something(this, type_num)
                CLASS(parent), POINTER :: this
                INTEGER type_num
                IF (type_num == 0) THEN
                    ALLOCATE (child::this)
                ELSE IF (type_num == 1) THEN
                    ALLOCATE (second::this)
                ENDIF
            END SUBROUTINE
        END MODULE

        PROGRAM testpass
            USE ptr_types
            CLASS(child), POINTER :: ctype

            SELECT TYPE(ctype)
            CLASS is (parent)
                CALL do_something(ctype, 0)
            END SELECT

            WRITE (*,*) ctype%q
        END PROGRAM

    However, this still fails: in the select statement it complains that parent must extend child. I'm sure this is due to restrictions on the pointer attribute, for type safety; however, I'm looking for a way to convert a pointer into its parent type for generic allocation, rather than having to write separate allocation functions for every type and hope they don't collide in an interface or something. Hopefully this example illustrates a little more clearly what I'm trying to achieve; if you know a better way, let me know.

    Read the article

  • How string accepting interface should look like?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string:

        void f(const char* str);      // (1)

    The other way would be to use an std::string:

        void f(const string& str);    // (2)

    It's also possible to write an overload and accept both:

        void f(const char* str);      // (3)
        void f(const string& str);

    Or even a template in conjunction with Boost string algorithms:

        template<class Range>
        void f(const Range& str);     // (4)

    My thoughts are: (1) is not C++ish and may be less efficient when subsequent operations need to know the string length. (2) is bad because now f("long very long C string"); invokes a construction of std::string, which involves a heap allocation; if f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources. (3) causes code duplication, although one f can call the other depending on which is the more efficient implementation; however, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*. (4) doesn't work with separate compilation and may cause even larger code bloat. Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?

    Read the article

  • Best way to implement some type of ITaggable interface

    - by Jack
    I've got a program I'm creating that reports on another program's backup XML files. I've gotten to the point where I need to implement some type of ITaggable interface, but am unsure how to go about it code-wise. My idea is that each item (BackupClient, BackupVersion, and BackupFile) should implement an ITaggable interface for highlighting old, out-of-date, or non-existent files in their HTML or Excel report. The user will be able to specify tags in the settings. My question is this: how can a user dynamically specify a tag such as "file date 3 days old? - background color = red"? Actually, I guess my question is more: how can I, the programmer, implement this dynamically? I was thinking expression trees, but am unsure this is the way to go as I haven't studied them much. I know my ITaggable interface would have methods such as AddTag(T tag) and RemoveTag(T tag), but what exactly specifies the criteria for the tag to be added? I realize this may be subjective, and can be marked as wiki if need be, but I truly am stuck. Any input would be greatly helpful!
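    One lightweight alternative to full expression trees is to model each user-configurable tag as a predicate plus the formatting to apply when it matches. The sketch below is illustrative only: the TagRule and BackupFileInfo shapes are hypothetical and not taken from the question.

        using System;
        using System.Collections.Generic;

        // Hypothetical rule type: the criteria is just a delegate, so rules can be
        // assembled from user settings (e.g. "file date older than 3 days" => red).
        public class TagRule<T>
        {
            public string Name { get; set; }
            public Func<T, bool> Criteria { get; set; }
            public string BackgroundColor { get; set; }
        }

        // Hypothetical stand-in for the question's BackupFile item.
        public class BackupFileInfo
        {
            public DateTime FileDate { get; set; }
            public List<string> Tags { get; private set; }
            public BackupFileInfo() { Tags = new List<string>(); }
        }

        public static class Tagger
        {
            // Adds every matching rule's name to the item's tags; the report layer can
            // later look up BackgroundColor by tag name when rendering HTML or Excel.
            public static void ApplyRules(BackupFileInfo item, IEnumerable<TagRule<BackupFileInfo>> rules)
            {
                foreach (var rule in rules)
                {
                    if (rule.Criteria(item))
                    {
                        item.Tags.Add(rule.Name);
                    }
                }
            }
        }

    A rule such as "file date 3 days old => red background" would then be expressed as new TagRule<BackupFileInfo> { Name = "Stale", Criteria = f => (DateTime.Now - f.FileDate).TotalDays > 3, BackgroundColor = "Red" }, which keeps the criteria user-configurable without resorting to expression trees.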

    Read the article

  • interface abstract in php real world scenario

    - by jason
    The goal is to learn whether to use abstract, interface, or both... I'm designing a program which allows a user to de-duplicate all images, but in the process, rather than just building classes, I'd like to build a set of libraries that will let me re-use the code for other possible future purposes. In doing so I would like to learn interface vs. abstract, and was hoping someone could give me input on using either. Here is what the current program will do:

    - recursively scan a directory for all files
    - determine whether each file type is an image type
    - compare an md5 checksum against all other files found and only keep the ones which are not duplicates
    - store the total duplicates found at the end and display the size taken up
    - copy files that are not duplicates into a folder by date, for example a Year, Month folder, with the filename being the file creation date

    While I could just create a bunch of classes, I'd like to start learning more about interfaces and abstraction in PHP. So if I take the scan-directory class as the first example, I have several methods:

        ScanForFiles($path)
        FindMD5Checksum()
        FindAllImageTypes()
        getFileList()

    ScanForFiles could be public to allow anyone to access it, and it stores in an object the entire directory list of files found and many details about them, for example extension, size, filename, path, etc. FindMD5Checksum runs against the fileList object created by ScanForFiles and adds the md5 if needed. FindAllImageTypes also runs against the fileList object and records whether the files are image types. FindMD5Checksum and FindAllImageTypes are optionally run methods, because the user might not intend to run them all the time, or at all. getFileList returns the fileList object itself. While I have a working copy, I've revamped it a few times trying to figure out whether I need to go with an interface or abstract or both. I'd like to know how an expert OO developer would design it and why.

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around(probably debugging). 1. Create certificate 1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”. 1.2 In the drop down list of certificates, choose <create> at the bottom. 1.3. Follow the instructions, you can set it up with default values. 1.4 When done. Choose the certificate and click “Copy to File…” as seen in the left of the picture above. 1.5. Save the file with any name you want. Now we will save it to local storage to be able to import it to our solution through the azure configuration manager in step 3. 2. Save certificate to local storage Now we need to attach it to our local certificate storage to be able to reach it from our confiuguration manager in visual studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137 In order to view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close , and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use. Now that you have access to the Certificates snap-in, you can import the server certificate into you computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed. If it is not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist.) Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, it will also be added to this store. Click Next. You should see a summary of screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate). Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on. 
Right-click on the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing that contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.   3. Import the certificate to you visual studio project. 3.1 Now right click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose properties. 3.2 Choose Certificates. Right click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate here. After selecting it- save the file.   4. Upload the certificate to your Azure subscription. 4.1 Go to the azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.     5. Connect to server. Since I tried to use account settings(have to use another name) we have to set up a new name for the connection. No biggie. 5.1 Go to azure management portal, select your service and in the bottom menu, choose “REMOTE”. This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change It here it might be good to choose download and replace the one in your project. Set a name that is not your windows azure account name and not Administrator. 5.2 Goto visual studio, click Server Explorer. Choose as selected in the picture below and click “COnnect using remote desktop”.   5.2 You will now be able to log in with the name and password set up in step 5.1. and voila! Windows server 2012, IIS and other nice stuff!   To do this one I’ve been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can collect some of this information and additional one.

    Read the article

  • Why can't I build Deluge?

    - by hugemeow
    Deluge is a BitTorrent Client. I am trying to build it from source, since I don't have privilege to install it as root. I am using python setup.py build. But, it failed following message, why? copying deluge/ui/web/themes/images/gray/slider/slider-v-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/slider copying deluge/ui/web/themes/images/gray/slider/slider-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/slider copying deluge/ui/web/themes/images/gray/panel/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/panel copying deluge/ui/web/themes/images/gray/tabs/tab-strip-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/tabs copying deluge/ui/web/themes/images/yourtheme/window/right-corners.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window copying deluge/ui/web/themes/images/yourtheme/window/left-corners.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window copying deluge/ui/web/themes/images/yourtheme/window/left-right.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window copying deluge/ui/web/themes/images/yourtheme/window/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window creating build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-v-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/slider/slider-v-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider copying deluge/ui/web/themes/images/yourtheme/panel/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/panel copying deluge/ui/web/themes/images/yourtheme/grid/hmenu-lock.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/grid copying deluge/ui/web/themes/images/yourtheme/grid/hmenu-unlock.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/grid copying deluge/ui/web/themes/images/yourtheme/tabs/tab-strip-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/tabs running build_ext building 'libtorrent' extension gcc -pthread -shared -L/usr/lib64 -L/opt/local/lib -lboost_filesystem -lboost_date_time -lboost_iostreams -lboost_python -lboost_thread -lpthread -lssl -lz -o build/lib.linux-x86_64-2.4/deluge/libtorrent.so /usr/bin/ld: cannot find -lboost_filesystem collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 [mirror@innov deluge-1.3.5]$ echo $? 1 Edit 1: gcc version and os information $(which gcc) --version gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52) Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
cat /etc/issue CentOS release 5.7 (Final) Kernel \r on an \m Edit 2: boost is referenced by setup.py in deluge 114 if OS == "linux": 115 if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 116 'libboost_filesystem-mt.so')): 117 boost_filesystem = "boost_filesystem-mt" 118 elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 119 'libboost_filesystem.so')): 120 boost_filesystem = "boost_filesystem" 121 if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 122 'libboost_date_time-mt.so')): 123 boost_date_time = "boost_date_time-mt" 124 elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 125 'libboost_date_time.so')): 126 boost_date_time = "boost_date_time" 127 if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 128 'libboost_thread-mt.so')): 129 boost_thread = "boost_thread-mt" 130 elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \ 131 'libboost_thread.so')): 132 boost_thread = "boost_thread" 133 134 if 'boost_filesystem' not in vars(): 135 boost_filesystem = "boost_filesystem-mt" 136 if 'boost_date_time' not in vars(): 137 boost_date_time = "boost_date_time-mt" 138 if 'boost_thread' not in vars(): 139 boost_thread = "boost_thread-mt" 140 141 elif OS == "freebsd": 142 boost_filesystem = "boost_filesystem" 143 boost_date_time = "boost_date_time" 144 boost_thread = "boost_thread" 145 else: 146 boost_filesystem = "boost_filesystem-mt" 147 boost_date_time = "boost_date_time-mt" 148 boost_thread = "boost_thread-mt" 149 150 librariestype = [boost_filesystem, boost_date_time, 151 boost_thread, 'z', 'pthread', 'ssl', 'crypto']

    Read the article

  • how does openvpn decide which interface to get IP addrs from

    - by bkrupa
    Using ubuntu 10.04 on both ends. We have a client and server machine on the SAME network attempting to make a vpn connection. We use the config files from here and made minimal changes. The server and client start and seem to connect without any trouble. The server looks like: Wed Feb 23 22:13:22 2011 MULTI: multi_create_instance called Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Re-using SSL/TLS context Wed Feb 23 22:13:22 2011 192.168.1.55:47166 LZO compression initialized Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel MTU parms [ L:1574 D:138 EF:38 EB:0 ET:0 EL:0 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Local Options hash (VER=V4): 'f7df56b8' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Expected Remote Options hash (VER=V4): 'd79ca330' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 TLS: Initial packet from 192.168.1.55:47166, sid=69112e42 5458135b *...* Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Feb 23 22:13:22 2011 192.168.1.55:47166 [client1] Peer Connection Initiated with 192.168.1.55:47166 On the client side the connection looks like: Wed Feb 23 22:20:07 2011 [server] Peer Connection Initiated with [AF_INET]192.168.1.41:1194 Wed Feb 23 22:20:10 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1) Wed Feb 23 22:20:10 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 10.8.0.4,ping 10,ping-restart 120,ifconfig 10.8.0.50 255.255.255.0' ... Wed Feb 23 22:20:10 2011 /sbin/ifconfig tap0 10.8.0.50 netmask 255.255.255.0 mtu 1500 broadcast 10.8.0.255 Wed Feb 23 22:20:10 2011 Initialization Sequence Completed The openvpn server has been configured to assign ip addresses in the range 10.8.0.* and the client has been given 10.8.0.50. When I run the following nmap from the client: Starting Nmap 5.00 ( http://nmap.org ) at 2011-02-23 22:04 EST Host 10.8.0.50 is up (0.00047s latency). Nmap done: 256 IP addresses (1 host up) scanned in 30.34 seconds Host 192.168.1.1 is up (0.0025s latency). Host 192.168.1.18 is up (0.074s latency). Host 192.168.1.41 is up (0.0024s latency). Host 192.168.1.55 is up (0.00018s latency). Nmap done: 256 IP addresses (4 hosts up) scanned in 6.33 seconds If I run an nmap from the server on 10.8.0.* I get nothing. If the client has two interfaces (wireless and tap device) when you look for a certain ip address, how does it decide which interface to connect on? edit I am trying to set up a vpn so that I can connect to my home network from a remote network. It seems like openvpn is connecting but none of the computers on my home network appear as network machines even after the connection is "Established". Stripped versions of the client and server config files are posted below. Thanks for any help you can offer. 
    server.conf:

        port 1194
        proto udp
        dev tap
        ca /etc/openvpn/easy-rsa/keys/ca.crt
        cert /etc/openvpn/easy-rsa/keys/server.crt
        key /etc/openvpn/easy-rsa/keys/server.key  # This file should be kept secret
        dh /etc/openvpn/easy-rsa/keys/dh1024.pem
        ifconfig-pool-persist ipp.txt
        server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    client.conf:

        client
        dev tap
        dev-node tap0901
        proto udp
        remote ********** 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client1.crt
        key client1.key
        comp-lzo
        verb 3

    One other thing that might be helpful: I tried to connect using the OpenVPN GUI for Windows and the connection stalls out on "obtaining configuration" and the bar just scrolls forever.

    Read the article

  • unable to sniff traffic despite network interface being in monitor or promiscuous mode

    - by user65126
    I'm trying to sniff out my network's wireless traffic but am having issues. I'm able to put the card in monitor mode, but am unable to see any traffic except broadcasts, multicasts and probe/beacon frames. I have two network interfaces on this laptop. One is connected normally to 'linksys' and the other is in monitor mode. The interface in monitor mode is on the right channel. I'm not associated with the access point because, as I understand, I don't need to if using monitor mode (vs promiscuous). When I try to ping the router ip, I'm not seeing that traffic show up in wireshark. Here's my ifconfig settings: daniel@seasonBlack:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:1f:29:9e:b2:89 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:112 errors:0 dropped:0 overruns:0 frame:0 TX packets:112 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:8518 (8.5 KB) TX bytes:8518 (8.5 KB) wlan0 Link encap:Ethernet HWaddr 00:21:00:34:f7:f4 inet addr:192.168.1.116 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::221:ff:fe34:f7f4/64 Scope:Link UP BROADCAST RUNNING MTU:1500 Metric:1 RX packets:9758 errors:0 dropped:0 overruns:0 frame:0 TX packets:4869 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3291516 (3.2 MB) TX bytes:677386 (677.3 KB) wlan1 Link encap:UNSPEC HWaddr 00-02-72-7B-92-53-33-34-00-00-00-00-00-00-00-00 UP BROADCAST NOTRAILERS PROMISC ALLMULTI MTU:1500 Metric:1 RX packets:112754 errors:0 dropped:0 overruns:0 frame:0 TX packets:101 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:18569124 (18.5 MB) TX bytes:12874 (12.8 KB) wmaster0 Link encap:UNSPEC HWaddr 00-21-00-34-F7-F4-00-00-00-00-00-00-00-00-00-00 UP RUNNING MTU:0 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) wmaster1 Link encap:UNSPEC HWaddr 00-02-72-7B-92-53-00-00-00-00-00-00-00-00-00-00 UP RUNNING MTU:0 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Here's my iwconfig settings: daniel@seasonBlack:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. wmaster0 no wireless extensions. wlan0 IEEE 802.11bg ESSID:"linksys" Mode:Managed Frequency:2.437 GHz Access Point: 00:18:F8:D6:17:34 Bit Rate=54 Mb/s Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=68/70 Signal level=-42 dBm Noise level=-69 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 wmaster1 no wireless extensions. wlan1 IEEE 802.11bg Mode:Monitor Frequency:2.437 GHz Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality:0 Signal level:0 Noise level:0 Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 Here's how I know I'm on the right channel: daniel@seasonBlack:~$ iwlist channel lo no frequency information. eth0 no frequency information. wmaster0 no frequency information. 
wlan0 11 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Current Frequency=2.437 GHz (Channel 6) wmaster1 no frequency information. wlan1 11 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Current Frequency=2.437 GHz (Channel 6)

    Read the article

  • spring web application context is not loaded from jar file in WEB-INF/lib when running tomcat in eclipse

    - by Remy J
    I am experimenting with spring, maven, and eclipse but stumbling on a weird issue. I am running Eclipse Helios SR1 with the STS (Spring tools suite) plugin which includes the Maven plugin also. What i want to achieve is a spring mvc webapp which uses an application context loaded from a local application context xml file, but also from other application contexts in jar files dependencies included in WEB-INF/lib. What i'd ultimately like to do is have my persistence layer separated in its own jar file but containing its own spring context files with persistence specific configuration (e.g a jpa entityManagerFactory for example). So to experiment with loading resources from jar dependencies, i created a simple maven project from eclipse, which defines an applicationContext.xml file in src/main/resources Inside, i define a bean <bean id="mybean" class="org.test.MyClass" /> and create the class in the org.test package I run mvn-install from eclipse, which generates me a jar file containing my class and the applicationContext.xml file: testproj.jar |_META-INF |_org |_test |_MyClass.class |_applicationContext.xml I then create a spring mvc project from the Spring template projects provided by STS. I have configured an instance of Tomcat 7.0.8 , and also an instance of springSource tc Server within eclipse. Deploying the newly created project on both servers works without problem. I then add my previous project as a maven dependency of the mvc project. the jar file is correctly added in the Maven Dependencies of the project. In the web.xml that is generated, i now want to load the applicationContext.xml from the jar file as well as the existing one generated for the project. My web.xml now looks like this: org.springframework.web.context.ContextLoaderListener <!-- Processes application requests --> <servlet> <servlet-name>appServlet</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value> classpath*:applicationContext.xml, /WEB-INF/spring/appServlet/servlet-context.xml </param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>appServlet</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> Also, in my servlet-context.xml, i have the following: <context:component-scan base-package="org.test" /> <context:component-scan base-package="org.remy.mvc" /> to load classes from the jar spring context (org.test) and to load controllers from the mvc app context. I also change one of my controllers in org.remy.mvc to autowire MyClass to verify that loading the context has worked as intended. public class MyController { @Autowired private MyClass myClass; public void setMyClass(MyClass myClass) { this.myClass = myClass; } public MyClass getMyClass() { return myClass; } [...] } Now this is the weird bit: If i deploy the spring mvc web on my tomcat instance inside eclipse (run on server...) 
I get the following error : org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: org/test/MyClass at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:442) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:458) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:339) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:306) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:127) at javax.servlet.GenericServlet.init(GenericServlet.java:160) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1133) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1087) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:996) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4834) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5155) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5150) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.NoClassDefFoundError: org/test/MyClass at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2427) at java.lang.Class.getDeclaredMethods(Class.java:1791) at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:446) at org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping.determineUrlsForHandlerMethods(DefaultAnnotationHandlerMapping.java:172) at org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping.determineUrlsForHandler(DefaultAnnotationHandlerMapping.java:118) at org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.detectHandlers(AbstractDetectingUrlHandlerMapping.java:79) at 
org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.initApplicationContext(AbstractDetectingUrlHandlerMapping.java:58) at org.springframework.context.support.ApplicationObjectSupport.initApplicationContext(ApplicationObjectSupport.java:119) at org.springframework.web.context.support.WebApplicationObjectSupport.initApplicationContext(WebApplicationObjectSupport.java:72) at org.springframework.context.support.ApplicationObjectSupport.setApplicationContext(ApplicationObjectSupport.java:73) at org.springframework.context.support.ApplicationContextAwareProcessor.invokeAwareInterfaces(ApplicationContextAwareProcessor.java:106) at org.springframework.context.support.ApplicationContextAwareProcessor.postProcessBeforeInitialization(ApplicationContextAwareProcessor.java:85) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:394) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1413) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) ... 25 more 20-Feb-2011 10:54:53 org.apache.catalina.core.ApplicationContext log SEVERE: StandardWrapper.Throwable org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: org/test/MyClass If i build the war file (using maven "install" goal), and then deploy that war file in the webapps directory of a standalone tomcat server (7.0.8 as well) it WORKS :-( What am i missing ? Thanks for the help.

    Read the article

  • Turn class "Interfaceable"

    - by scooterman
    Hi folks, on my company's system we use a class to represent beans. It is just a holder of information using boost::variant and some serialization/deserialization stuff. It works well, but we have a problem: it is not built on an interface, and since we use modularization through DLLs, building an interface for it is getting very complicated, because it is used in almost every part of our app, and sadly interfaces (abstract classes) in C++ have to be accessed through pointers, which makes it almost impossible to refactor the entire system. Our structure is:

    - DLL A: interface definition through an abstract class
    - DLL B: interface implementation

    Is there a painless way to achieve this (maybe using templates, I don't know), or should I forget about making this work and simply link everything with DLL B? Thanks.

    Edit: Here is my example. This is in DLL A. BeanProtocol is a holder of N DataProtocol items, which are accessed by an index.

        class DataProtocol;

        class UTILS_EXPORT BeanProtocol
        {
        public:
            virtual DataProtocol& get(const unsigned int) const { throw std::runtime_error("Not implemented"); }
            virtual void getFields(std::list<unsigned int>&) const { throw std::runtime_error("Not implemented"); }
            virtual DataProtocol& operator[](const unsigned int) { throw std::runtime_error("Not implemented"); }
            virtual DataProtocol& operator[](const unsigned int) const { throw std::runtime_error("Not implemented"); }
            virtual void fromString(const std::string&) { throw std::runtime_error("Not implemented"); }
            virtual std::string toString() const { throw std::runtime_error("Not implemented"); }
            virtual void fromBinary(const std::string&) { throw std::runtime_error("Not implemented"); }
            virtual std::string toBinary() const { throw std::runtime_error("Not implemented"); }
            virtual BeanProtocol& operator=(const BeanProtocol&) { throw std::runtime_error("Not implemented"); }
            virtual bool operator==(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); }
            virtual bool operator!=(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); }
            virtual bool operator==(const char*) const { throw std::runtime_error("Not implemented"); }
            virtual bool hasKey(unsigned int field) const { throw std::runtime_error("Not implemented"); }
        };

    The other class (named GenericBean) implements it. This is the only way I've found to make this work, but now I want to turn it into a true interface, remove the UTILS_EXPORT (which is a _declspec macro), and finally remove the forced linkage of B with A.

    Read the article

  • O’Reilly Deal of the Day 14/Aug/2014 - RESTful Web APIs

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/14/orsquoreilly-deal-of-the-day-14aug2014---restful-web-apis.aspx

    Today’s half-price Deal of the Day from O’Reilly at http://shop.oreilly.com/product/0636920028468.do?code=DEAL is RESTful Web APIs. “The popularity of REST in recent years has led to tremendous growth in almost-RESTful APIs that don’t include many of the architecture’s benefits. With this practical guide, you’ll learn what it takes to design usable REST APIs that evolve over time. By focusing on solutions that cross a variety of domains, this book shows you how to create powerful and secure applications, using the tools designed for the world’s most successful distributed computing system: the World Wide Web.”

    Read the article

  • Webcast with Brian Griffin, Ancestry, 2013 Winner 10 Best Web Support Sites

    - by Tuula Fai
    The web is one of the fastest growing channels for providing service, support and information, as seen in The Service Council's (TSC) latest multi-channel research survey. Join TSC's Chief Customer Officer Sumair Dutta as he shares key findings from his current customer experience research from over 200 organizations. Sumair will be joined by Brian Griffin, Senior Program Manager, Global Support Experience, Ancestry.com who will show how Ancestry is using the web as a powerful tool to enhance self-service opportunities and increase customer engagement. Smarter Web Service Educast Thursday, November 14th 2 pm ET / 11 am PT Register: http://bit.ly/1cwz4Ns  

    Read the article

  • Setting up a VPN connection to Amazon VPC - routing

    - by Keeno
    I am having some real issues setting up a VPN between out office and AWS VPC. The "tunnels" appear to be up, however I don't know if they are configured correctly. The device I am using is a Netgear VPN Firewall - FVS336GV2 If you see in the attached config downloaded from VPC (#3 Tunnel Interface Configuration), it gives me some "inside" addresses for the tunnel. When setting up the IPsec tunnels do I use the inside tunnel IP's (e.g. 169.254.254.2/30) or do I use my internal network subnet (10.1.1.0/24) I have tried both, when I tried the local network (10.1.1.x) the tracert stops at the router. When I tried with the "inside" ips, the tracert to the amazon VPC (10.0.0.x) goes out over the internet. this all leads me to the next question, for this router, how do I set up stage #4, the static next hop? What are these seemingly random "inside" addresses and where did amazon generate them from? 169.254.254.x seems odd? With a device like this, is the VPN behind the firewall? I have tweaked any IP addresses below so that they are not "real". I am fully aware, this is probably badly worded. Please if there is any further info/screenshots that will help, let me know. Amazon Web Services Virtual Private Cloud IPSec Tunnel #1 ================================================================================ #1: Internet Key Exchange Configuration Configure the IKE SA as follows - Authentication Method : Pre-Shared Key - Pre-Shared Key : --- - Authentication Algorithm : sha1 - Encryption Algorithm : aes-128-cbc - Lifetime : 28800 seconds - Phase 1 Negotiation Mode : main - Perfect Forward Secrecy : Diffie-Hellman Group 2 #2: IPSec Configuration Configure the IPSec SA as follows: - Protocol : esp - Authentication Algorithm : hmac-sha1-96 - Encryption Algorithm : aes-128-cbc - Lifetime : 3600 seconds - Mode : tunnel - Perfect Forward Secrecy : Diffie-Hellman Group 2 IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows: - DPD Interval : 10 - DPD Retries : 3 IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway: - TCP MSS Adjustment : 1387 bytes - Clear Don't Fragment Bit : enabled - Fragmentation : Before encryption #3: Tunnel Interface Configuration Your Customer Gateway must be configured with a tunnel interface that is associated with the IPSec tunnel. All traffic transmitted to the tunnel interface is encrypted and transmitted to the Virtual Private Gateway. The Customer Gateway and Virtual Private Gateway each have two addresses that relate to this IPSec tunnel. Each contains an outside address, upon which encrypted traffic is exchanged. Each also contain an inside address associated with the tunnel interface. The Customer Gateway outside IP address was provided when the Customer Gateway was created. Changing the IP address requires the creation of a new Customer Gateway. The Customer Gateway inside IP address should be configured on your tunnel interface. 
Outside IP Addresses: - Customer Gateway : 217.33.22.33 - Virtual Private Gateway : 87.222.33.42 Inside IP Addresses - Customer Gateway : 169.254.254.2/30 - Virtual Private Gateway : 169.254.254.1/30 Configure your tunnel to fragment at the optimal size: - Tunnel interface MTU : 1436 bytes #4: Static Routing Configuration: To route traffic between your internal network and your VPC, you will need a static route added to your router. Static Route Configuration Options: - Next hop : 169.254.254.1 You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels. IPSec Tunnel #2 ================================================================================ #1: Internet Key Exchange Configuration Configure the IKE SA as follows - Authentication Method : Pre-Shared Key - Pre-Shared Key : --- - Authentication Algorithm : sha1 - Encryption Algorithm : aes-128-cbc - Lifetime : 28800 seconds - Phase 1 Negotiation Mode : main - Perfect Forward Secrecy : Diffie-Hellman Group 2 #2: IPSec Configuration Configure the IPSec SA as follows: - Protocol : esp - Authentication Algorithm : hmac-sha1-96 - Encryption Algorithm : aes-128-cbc - Lifetime : 3600 seconds - Mode : tunnel - Perfect Forward Secrecy : Diffie-Hellman Group 2 IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows: - DPD Interval : 10 - DPD Retries : 3 IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway: - TCP MSS Adjustment : 1387 bytes - Clear Don't Fragment Bit : enabled - Fragmentation : Before encryption #3: Tunnel Interface Configuration Outside IP Addresses: - Customer Gateway : 217.33.22.33 - Virtual Private Gateway : 87.222.33.46 Inside IP Addresses - Customer Gateway : 169.254.254.6/30 - Virtual Private Gateway : 169.254.254.5/30 Configure your tunnel to fragment at the optimal size: - Tunnel interface MTU : 1436 bytes #4: Static Routing Configuration: Static Route Configuration Options: - Next hop : 169.254.254.5 You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels. EDIT #1 After writing this post, I continued to fiddle and something started to work, just not very reliably. The local IPs to use when setting up the tunnels where indeed my network subnets. Which further confuses me over what these "inside" IP addresses are for. The problem is, results are not consistent what so ever. I can "sometimes" ping, I can "sometimes" RDP using the VPN. Sometimes, Tunnel 1 or Tunnel 2 can be up or down. When I came back into work today, Tunnel 1 was down, so I deleted it and re-created it from scratch. Now I cant ping anything, but Amazon AND the router are telling me tunnel 1/2 are fine. I guess the router/vpn hardware I have just isnt up to the job..... EDIT #2 Now Tunnel 1 is up, Tunnel 2 is down (I didn't change any settings) and I can ping/rdp again. EDIT #3 Screenshot of route table that the router has built up. Current state (tunnel 1 still up and going string, 2 is still down and wont re-connect)

    Read the article

  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in <table> elements for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content make it harder both to style the Web controls using CSS and to work with the controls from client-side

    Read the article

  • MSDN Webcast: Introduction to ASP.NET Web Pages with Razor Syntax

    - by carlone
    Dear friends: Tomorrow I will have the pleasure of sharing another webcast with you. You are invited:

    Event ID: 1032487341
    Moderator(s): Carlos Augusto Lone Saenz.
    Languages: Spanish.
    Products: Microsoft ASP.NET and Microsoft SQL Server.
    Audience: Programmer/software developer.

    Come and learn in this session about the new simplified programming model, new syntax, and web helpers that make up ASP.NET Web Pages with 'Razor'. This new way of building ASP.NET applications is aimed directly at developers new to the .NET platform and at developers trying to build web applications quickly. It also includes SQL Compact, an embedded database that is xcopy-deployable. We will show some recently added functionality, including a package manager that makes it easy to add third-party libraries to your applications. Register here: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032487341&Culture=es-AR

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that: Allows my users to share large files with: The general public securely to 1 or more people (notification via email, optionally with a token that gives them x period of time to download) Allows anyone in the general public to share files with my users. Perhaps by invitation. Has to be user friendly enough to allow my users to use this with out having to bug me as the admin. It needs to be a system that we can install on our own server (we don't want shared data sitting on anyone else's server) A web based solution. Using some kind or secure comms channel would be good too, eg, ssh Files to share could be over 1 GB. I found the question below. WebDav does not sound user friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen EDIT Sorry Tom and Jeff but Glen specifically says that he's looking for a 'product' so given that I specialise in this field thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • AppFabric OutputCaching for ASP.NET Web API

    - by cibrax
    ASP.NET Web API does not provide any output caching capabilities out of the box other than the ones you would traditionally find in the ASP.NET caching module. Fortunately, Filip wrote a very nice library that you can use to decorate your Web API controller methods with an [OutputCaching] attribute, which is similar to the one you can find in ASP.NET MVC. This library provides a way to configure different persistence storages for the cached data, which uses memory by default. As part of this post, I will show how you can implement your own persistence provider for AppFabric in order to support distributed caching on web applications running on premises. Read more here  
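    As a rough usage illustration only: the attribute name below follows the post's wording ([OutputCaching]), and the real library may expose a different attribute name and configuration settings, so treat this as an assumption rather than the library's actual API.

        using System.Collections.Generic;
        using System.Web.Http;

        public class ProductsController : ApiController
        {
            // Hypothetical usage sketch: the [OutputCaching] attribute name is taken from
            // the post's description; the actual library may use different names and
            // expose settings such as cache duration or the persistence provider.
            [OutputCaching]
            public IEnumerable<string> Get()
            {
                // The first call executes this body; later calls within the cache window
                // would be served from the configured store (memory by default, or
                // AppFabric once a custom persistence provider is plugged in).
                return new[] { "first", "second" };
            }
        }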

    Read the article
