Search Results

Search found 5769 results on 231 pages for 'wcf routing'.


  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're building a simple subway routing framework, and the design seemed to be done extremely well: several main models, corresponding views, main logic and data structures were modeled "as they should be" and fully separated from rendering, and the algorithmic part was implemented apart from the main models with only a minor number of intersection points. I would call the design scalable, customizable, easy to implement, interacting mostly through "black box" interfaces and, well, very nice.

    What was done before I left:

    - I started implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts.
    - I had a document describing the coding style, with examples of its usage (my own written code).
    - I enforced more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers), etc.
    - I documented the purpose of concrete interface implementations and how they should be used.
    - Unit tests (mostly integration tests, because there wasn't a lot of "actual" code yet) and a set of mocks for all the core abstractions.

    I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team):

    - 3 different coding styles all over the project (I guess two of them agreed to use the same style :). The same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures.
    - Absolute lack of documentation for the recently implemented parts.
    - What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on.
    - Half of the used interfaces now contain member variables (sic!).
    - Raw pointer usage almost everywhere.
    - Unit tests disabled, because "(Rev. 57) They are unnecessary for this project".
    - ... (that's probably not everything).

    Commit history shows that my design was interpreted as overkill, and people started combining it with their own homegrown solutions and reinvented wheels, and then had problems integrating the resulting code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume some memory leaks.

    Is there anything I can do in this case? I realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Has anyone had a similar situation? Basically I thought that a good start for the project (well, I did everything that I could) would probably lead to something nice; however, I understand that I'm wrong. Any advice would be appreciated, and sorry for my bad English.


  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years, and have never really been able to resolve, is exactly at what tier in an application human-readable error information should be retrieved for display to a user.

    A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

        public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

        public class ClearUserAccountsResult : IResult
        {
            public List<Account> ClearedAccounts { get; private set; }
            public bool Success { get; private set; }    // implements IResult
            public string Message { get; private set; }  // implements IResult, human readable

            // Constructor implemented here to set the read-only properties...
        }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized.

    However, this idea of returning human-readable information from the business tier has always been suspect to me, partly because what constitutes the public surface of the API may change over time, and the API may need to be reused by other API components in the future that do not need the human-readable string messages (looking them up from a database would then be an expensive waste).

    I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and then retrieve human-readable error strings somewhere closer to the UI layer, or only in the UI itself. But I have two problems here:

    1) The UI may be a remote client (WinForms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server.

    2) Often there are multiple legitimate modes of failure. If the business tier becomes so vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was: i.e. if a method has 3 modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be.

    I have thought about using failure enums as a substitute; they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns rather than just a boolean, but it is not so good for serialization scenarios.

    Is there a well-worn pattern for this? What do people think? Thanks.
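    For what it's worth, one way to picture the failure-enum option mentioned above is a minimal sketch along these lines (all type and member names here are invented for illustration): the business tier reports only what failed, and the layer closest to the UI owns the translation into display text.

        using System;

        // Hypothetical names, invented for illustration only.
        public enum ClearUserAccountsFailure
        {
            None = 0,
            AccountsLocked,
            NothingToClear,
            PermissionDenied
        }

        public class ClearUserAccountsResult
        {
            public bool Success { get; private set; }
            public ClearUserAccountsFailure Failure { get; private set; }

            public ClearUserAccountsResult(bool success, ClearUserAccountsFailure failure)
            {
                Success = success;
                Failure = failure;
            }
        }

        // UI-side translation; in practice this could read from resource files so the
        // text stays localizable and never leaks into the business tier.
        public static class FailureMessages
        {
            public static string ToDisplayText(ClearUserAccountsFailure failure)
            {
                switch (failure)
                {
                    case ClearUserAccountsFailure.AccountsLocked:
                        return "Some accounts are locked and could not be cleared.";
                    case ClearUserAccountsFailure.NothingToClear:
                        return "There were no accounts to clear.";
                    case ClearUserAccountsFailure.PermissionDenied:
                        return "You do not have permission to clear accounts.";
                    default:
                        return string.Empty;
                }
            }
        }

    The enum still serializes over WCF, so remote UIs can fetch only the code and do the lookup locally.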


  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage in the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure, you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is at no cost, a good solution can be to maintain the information in Azure Storage Tables, read its contents in the Application_Start event, and propagate changes to all other instances using the internal HTTP port available on Azure web roles.

    You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles and keep all instances up to date.

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.

    2. Add a new WCF service to the Web Role, for example NotificationService.svc.

    3. Disable multiple site bindings in web.config:

        <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    4. Add a method on the new service to receive notifications from other role instances:

        namespace Service
        {
            [ServiceContract]
            public interface INotificationService
            {
                [OperationContract(IsOneWay = true)]
                void Notify(Information info);
            }
        }

    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint:

        public class InternalServiceFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var internalEndpointAddress = string.Format(
                    "http://{0}/NotificationService.svc",
                    RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);

                ServiceHost host = new ServiceHost(
                    typeof(NotificationService),
                    new Uri(internalEndpointAddress));

                BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);

                host.AddServiceEndpoint(
                    typeof(INotificationService),
                    binding,
                    internalEndpointAddress);

                return host;
            }
        }

    Note that you can use BasicHttpSecurityMode.None because the internal endpoint is private to the instances of the service.

    6. Edit the markup of the service (right-click the svc file and select "View markup") to set the new factory as the factory used to create the service:

        <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %>

    7. Now you can notify changes to the other instances using this code:

        var current = RoleEnvironment.CurrentRoleInstance;
        var endPoints = current.Role.Instances
            .Where(instance => instance != current)
            .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);

        foreach (var ep in endPoints)
        {
            EndpointAddress address = new EndpointAddress(
                String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
            BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
            var factory = new ChannelFactory<INotificationService>(binding);
            INotificationService instance = factory.CreateChannel(address);
            instance.Notify(changedinfo);
        }


  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

    After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger: if inheritable permissions are not allowed on the Exchange 2003 object in Active Directory, Exchange server authentication cannot be achieved between the servers.

    It seems that when BlackBerry Enterprise Server gets added to Exchange 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can re-establish inheritance without overwriting the existing ACL, however, so the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers.

    To re-establish inheritance:

    1. Open ADSIEdit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on CN=<Server Name> of your Exchange 2003 server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "include inheritable permissions" toward the bottom of the dialog box.


  • Need help identifying what resources (e.g. in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions. The details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html

    CS2100 Computer Organisation (please click title)
    The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems.
    Recommended textbooks:
    - Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall. ISBN 0-13-324500-4.
    - Computer Organization and Design (The Hardware/Software Interface) by David A. Patterson and John L. Hennessy.

    CS2105 Introduction to Computer Networks (please click title)
    This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments enforcing learning will also be given. These assignments provide an early exposure to network application programming and should be possible to complete using personal computers and the school's network facilities. Topics included: an overview of computer networks and the Internet, basic data communications, application layer, transport layer, network layer and routing, link layer and local area networks.
    Recommended textbook:
    - James F. Kurose & Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Addison Wesley, 2001.

    I am wondering what resources, e.g. MIT OpenCourseWare or other universities' resources, are available to help me prepare for these particular modules. Does the networking one look like CCNA? The computer organization one: is it like electronics, assembly, etc.? I learnt some electronics in poly, but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of being exempted from these modules :) any help?


  • OData – The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code, so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only.

    Create a new web project. I created an empty MVC 2 application, but MVC is not required for OData.

    Add a new WCF Data Service to the project. I named mine NastyWords.svc since I'm serving up a list of nasty words.

    Add a class to expose via the service:

        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

    I need to be able to uniquely identify instances of NastyWord for the DataService, so I used the DataServiceKey attribute with the "Word" property as the key. I could have added an "ID" property which would have uniquely identified them and would then not need the "DataServiceKey" attribute, because the DataService would apply some reflection and heuristics to guess at which property is the unique identifier. However, the words themselves are unique, so adding an "ID" property would be redundantly repetitive.

    Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

        public class NastyWordsDataSource
        {
            private static IList<NastyWord> words = new List<NastyWord>
            {
                new NastyWord { Word = "crap" },
                new NastyWord { Word = "darn" },
                new NastyWord { Word = "hell" },
                new NastyWord { Word = "shucks" }
            };

            public NastyWordsDataSource()
            {
                NastyWords = words.AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }
        }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Compile, browse to my NastyWords.svc and weep with joy. Now I can query my service just like any other OData service. Next time, I'll modify this service to allow updates to be sent so I can build up my list of nasty words. Enjoy!
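    As a quick way to see the result, here is a hedged client-side sketch (the localhost port is an assumption from a local dev setup): the entity set is plain HTTP, so a GET against the set URL, optionally with standard OData query options such as $filter, is enough.

        using System;
        using System.Net;

        class NastyWordsClient
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Whole entity set, returned as an Atom/XML feed by default.
                    Console.WriteLine(client.DownloadString(
                        "http://localhost:1234/NastyWords.svc/NastyWords"));

                    // Server-side filtering via a standard OData query option.
                    Console.WriteLine(client.DownloadString(
                        "http://localhost:1234/NastyWords.svc/NastyWords?$filter=Word eq 'darn'"));
                }
            }
        }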


  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably NServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above.
        -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter, among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely.

    Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point, sort of like 3NF in databases, and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
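    To make the contrast in the quote concrete, a rough, invented sketch of the two command styles being weighed might look like this (the names are illustrative, not from any particular framework):

        using System;

        // Coarse, Excel-like command: carries a whole entity snapshot,
        // most of which is unchanged, and says nothing about intent.
        public class UpdateCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public bool IsPreferred { get; set; }
            public bool IsMarried { get; set; }
            // ...dozens more fields
        }

        // Task-based commands: each captures a single user intent,
        // so concurrent edits to unrelated facts no longer conflict.
        public class MakeCustomerPreferredCommand
        {
            public Guid CustomerId { get; set; }
        }

        public class CustomerMovedCommand
        {
            public Guid CustomerId { get; set; }
            public string NewAddress { get; set; }
        }

    The granularity question in the post is then about how many of these intent-sized messages it is reasonable to put on the wire, not about splitting every field into its own command.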


  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the records returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages.

    In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:

        var context = new AdventureWorksEntities();
        var query = from od in context.SalesOrderDetail
                    where od.ModifiedDate >= modified
                       && od.SalesOrderDetailID.CompareTo(id) > 0
                    orderby od.SalesOrderDetailID
                    select od;

    We can find the maximum SalesOrderDetailID like this:

        var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);

    which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:

        Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;
        var maxIdInefficiently = query.Max(idFunc);

    This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:

        Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;
        var maxIdEfficientlyAgain = query.Max(idExpression);

    - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.
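    Generalizing the fix, a generic helper has to accept Expression<Func<T, int>> rather than Func<T, int> so the provider can translate the aggregate into SQL; a minimal sketch of the idea (assuming the key type is int and a non-empty page) could look like this:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class PagingHelper
        {
            // Both Max() calls below stay as SQL MAX() because they receive an Expression.
            // Note: Max() throws on an empty sequence, so guard the empty-page case separately.
            public static bool HasMorePages<T>(
                IQueryable<T> fullQuery,
                IQueryable<T> page,
                Expression<Func<T, int>> idExpression)
            {
                int maxIdInPage = page.Max(idExpression);
                int maxIdOverall = fullQuery.Max(idExpression);
                return maxIdOverall > maxIdInPage;
            }
        }

    Called as PagingHelper.HasMorePages(query, query.Take(pageSize), od => od.SalesOrderDetailID), both aggregates are evaluated on SQL Server rather than in the service layer.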


  • TransportWithMessageCredential & Service Bus – Introduction

    - by Michael Stephenson
    Recently we have been working on a project using the Windows Azure Service Bus to expose line-of-business applications. One of the topics we discussed a lot was the security aspects of the solution. Most of the samples you see for Windows Azure Service Bus use a shared secret with the Access Control Service to protect the service bus endpoint, but one of the problems we found was that with this scenario any claims resulting from credentials supplied by the client are not passed through to the service listening on the service bus endpoint.

    As an example of this, we originally were hoping that we could give two different clients their own shared secret key, and the issuer for each would indicate which client it was. If the claims had flowed to the listening service, then we could check that a message sent by client one was of a type it is allowed to send. Unfortunately this claim does not flow to the listening service, so we were unable to implement this scenario.

    We had also seen samples that talk about changing the relayClientAuthenticationType attribute, which would allow you to authenticate the client within the service itself rather than with ACS. While this was interesting, it wasn't exactly what we wanted. By removing the step where access to the relay endpoint is protected by authentication against ACS, anyone could send messages via the service bus to the on-premise listening service, which would then have to authenticate clients. In our scenario we certainly didn't want to allow clients to skip the ACS authentication step, because this could open up two attack opportunities. The first would allow an attacker to send messages through to our on-premise servers and potentially cause a denial-of-service situation. The second is the same kind of attack: by running lots of messages through the service bus which were then rejected, the attacker would cause us to incur per-message charges on our Windows Azure account.

    The correct way to implement our desired scenario is to combine one of the common options for authenticating against ACS, so the service bus endpoint cannot be accessed by an unauthenticated caller, with the normal WCF security features using the TransportWithMessageCredential security option. Looking around I could not find any guidance on how to implement this correctly, so on the back of setting this up I decided to write a couple of articles to walk through a couple of the common scenarios you may be interested in. These are available on the following links:

    - Walkthrough - Combining shared secret and username token
    - Walkthrough - Combining shared secret and certificates
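    As a rough illustration of the combined option (this sketch is not taken from the walkthroughs above, and the exact property names should be checked against the Service Bus SDK version you are using), the relay binding keeps the ACS check on the relay itself while TransportWithMessageCredential adds a client credential that flows through to the listener:

        // using Microsoft.ServiceBus; -- assumes the WCF relay bindings from the Service Bus SDK
        var binding = new WS2007HttpRelayBinding();

        // Keep ACS guarding the relay endpoint itself...
        binding.Security.RelayClientAuthenticationType =
            RelayClientAuthenticationType.RelayAccessToken;

        // ...and additionally require a message credential (username token here,
        // a certificate is the other common choice) that the listening service validates.
        binding.Security.Mode = EndToEndSecurityMode.TransportWithMessageCredential;
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;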


  • Wifi problems after upgrading to 13.10

    - by Simon
    I just upgraded to Ubuntu 13.10, but since the upgrade I don't have internet access via wifi anymore.

    I can:
    - See networks
    - Connect to a network
    - Ping myself (localhost, 192.168.0.103)

    I can't:
    - Ping others (including other devices on the same wireless network, including the gateway/router)
    - Resolve hosts
    - Access any other external resource, whether on my own network or on the internet

    Using Wireshark, I noticed my computer is continuously sending ARP requests like "Who has 192.168.0.1 [which is the gateway]? Tell 192.168.0.103". It doesn't get any replies though. When I ping another IP address for which it knows the MAC address (from cache), it turns out a packet loss of 90% occurs, and even if a packet manages to arrive it takes around 3000 ms.

    The output of route -n is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth1
        192.168.0.0     0.0.0.0         255.255.255.0   U     9      0        0 eth1
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

    Before upgrading, wifi worked fine. Using other devices, wifi still works fine. Resetting the router didn't help. Ethernet still works after upgrading. Any suggestions?

    Update: I'm using the wl driver. Here's the relevant output of some commands:

        lspci | grep Wireless
        03:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01)

        cat /etc/modprobe.d/blacklist.conf
        [...]
        blacklist mac80211
        blacklist brcm80211
        blacklist cfg80211
        blacklist lib80211_crypt_tkip
        blacklist lib80211
        blacklist b43

        cat /etc/rc.local
        sudo modprobe -r lib80211
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_wep.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_tkip.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_ccmp.ko
        sudo modprobe wl
        exit 0

    The last lines are probably how I got wireless working after the previous upgrade (wireless has been a problem after each upgrade).

    Update 2: added information about the exact hardware below. The hardware is an integrated device, so I ran lspci -nn | grep -i network. The output is:

        03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)


  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with using the sole HTTP API approach:

    - It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle).
    - You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of ORM, which can save you writing a lot of code.
    - For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses):

    - Having one method of doing things keeps things simple. - You'd only do things differently if you were using a different language anyway.
    - It will become robust. - Seeing as the API will run off the library of models, I think my option would be just as robust.

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.


  • Looking for an algorithm to connect dots - shortest route

    - by e4ch
    I have written a program to solve a special puzzle, but now I'm kind of stuck at the following problem:

    - I have about 3200 points/nodes/dots.
    - Each of these points is connected to a few other points (usually 2-5, theoretical limit is 1-26).
    - I have exactly one starting point and about 30 exit points (probably all of the exit points are connected to each other).
    - Many of these 3200 points are probably not connected to either the start or the exit points in any way, like a separate net, but all points are connected to at least one other point.

    I need to find the shortest number of hops to go from entry to exit. There is no distance between the points (unlike the road or train routing problem); just the number of hops counts. I need to find all solutions with the shortest number of hops, not just one solution, and potentially also solutions with one more hop, etc. I expect a solution to have about 30-50 hops from start to exit.

    I already tried:

    1) Randomly trying possibilities and just starting over when the count was bigger than a previous solution. I got a first solution with 3500 hops, then it got down to about 97 after some minutes, but looking at the solutions I saw problems like unnecessary loops and stuff, so I tried to optimize a bit (like not going back where it came from, etc.). More optimizations are possible, but this random approach doesn't find all best solutions, or takes too long.

    2) Recursively running through all ways from the start (chess-problem-like) and breaking the attempt when it reached a previously visited point. This was looping at about a length of 120 nodes, so it tries chains that are (probably) by far too long. If we calculate 4 possibilities and 120 nodes, we're reaching 1.7E72 possibilities, which is not possible to calculate through. This is called depth-first search (DFS), as I found out in the meantime. Maybe I should try breadth-first search by adding some queue?

    The connections between the points are actually moves you can make in the game, and the points are how the game looks after you made the move. What would be the algorithm to use for this problem? I'm using C#.NET, but the language shouldn't matter.
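    Breadth-first search is indeed the textbook fit for unweighted shortest paths; a minimal C# sketch (assuming the connections are symmetric - if moves are one-way, build a reverse adjacency list for the backtracking step) could look like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ShortestHops
        {
            // BFS labels every node with its hop distance from the start.
            public static int[] Distances(List<int>[] adj, int start)
            {
                var dist = Enumerable.Repeat(int.MaxValue, adj.Length).ToArray();
                var queue = new Queue<int>();
                dist[start] = 0;
                queue.Enqueue(start);

                while (queue.Count > 0)
                {
                    int u = queue.Dequeue();
                    foreach (int v in adj[u])
                    {
                        if (dist[v] == int.MaxValue)   // first time v is reached
                        {
                            dist[v] = dist[u] + 1;
                            queue.Enqueue(v);
                        }
                    }
                }
                return dist;
            }

            // Every shortest route uses only edges that step from distance d to d + 1,
            // so all of them can be enumerated by walking backwards from the exit.
            public static IEnumerable<List<int>> AllShortestPaths(
                List<int>[] adj, int[] dist, int start, int exit)
            {
                if (exit == start)
                {
                    yield return new List<int> { start };
                    yield break;
                }
                foreach (int prev in adj[exit].Where(p => dist[p] == dist[exit] - 1))
                {
                    foreach (var path in AllShortestPaths(adj, dist, start, prev))
                    {
                        path.Add(exit);
                        yield return path;
                    }
                }
            }
        }

    dist[exit] gives the minimum hop count for each of the ~30 exits (int.MaxValue means unreachable), and routes one hop longer can be found the same way by relaxing the distance check by one; the single BFS pass over ~3200 nodes is effectively instant.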


  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code; the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html ... for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP / MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled.

    However, every few weeks my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes for me to notice. Here's the offending section in apache.conf:

        <Directory "/">
            AllowOverride All
            Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch
        </Directory>

    The addition of SymLinksIfOwnerMatch disables the sites in a strange way: the HTML is generated correctly, but all css/js/images in the HTML fail to load, and clicking any link redirects to /. And I have no idea why.

    I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present:

        <IfModule mod_rewrite.c>
            # www.example.com -> example.com
            RewriteCond %{HTTPS} !=on
            RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
            RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

            # Remove query strings from static resources
            RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L]
            RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L]
            RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L]

            # Block access to hidden files and directories
            RewriteCond %{SCRIPT_FILENAME} -d [OR]
            RewriteCond %{SCRIPT_FILENAME} -f
            RewriteRule "(^|/)\." - [F]

            # SLIR ... reroute images to image processor
            RewriteCond %{REQUEST_URI} ^/images/.*$
            RewriteRule ^.*$ - [L]

            # ignore rules if URL is a file
            RewriteCond %{REQUEST_FILENAME} !-f

            # ignore rules if URL is not php
            #RewriteCond %{REQUEST_URI} !\.php$

            # catch-all for routing
            RewriteRule . index.php [L]
        </IfModule>

    I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great, except when SymLinksIfOwnerMatch gets added back into apache.conf.

    Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755. But that doesn't seem to be enough. My Unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not.

    Anyway, I'm entering unknown territory. Can anyone recommend the right way to configure a website for multiple sites using one codebase that doesn't rely on apache.conf?


  • BizTalk Orchestration & Port Tutorial Part 2

    - by bosuch
    In Part 1 I showed how to create and publish a simple Orchestration demo. Now we'll finish configuring it in the admin console and test it.

    Open the BizTalk Server 2009 Administration Console, and expand BizTalk Server 2009 Administration, then Applications. You should have an entry for OrchestrationPortDemo - expand it as well.

    First, we'll add the Receive Port - the place where we'll drop the test file. Right-click on Receive Ports and select New One-way Receive Port. On the General tab, name it InputPort, then click over to Receive Locations. Click New to add a new location. Your receive location can be FTP, SQL, WCF, SharePoint, or many other choices, but for this demo we'll add a File location. Click the Configure button, set a receive folder (something like "C:\PortDemo\") and a file mask (stick with "*.xml" for now), and click OK three times to create your Receive Port.

    Next we'll create the Send Port - the location where BizTalk will drop the file. Right-click on Send Ports and choose New Static One-way Send Port. Give it an appropriate name, and configure the FILE Transport Properties as shown. Click OK twice and your Send Port will be created.

    Now we'll configure the Orchestration Bindings. Click on Orchestrations, then right-click the orchestration itself and select Properties. Select the Bindings tab. Choose BizTalkServerApplication as the host, and select the Send and Receive ports you previously created.

    Now it's time to fire everything up. Right-click on the send port you created and click Start. Once the Status column displays "Started", click on Receive Locations and Enable the Receive Location previously created. Finally, start the Orchestration.

    Now, time to test! Create a simple xml file like:

        <root>
            <Node1>Test</Node1>
            <Node2>Test</Node2>
        </root>

    And drop it into the C:\PortDemo folder. After a couple of seconds the file should disappear - this indicates BizTalk has picked it up for processing. Look in the C:\PortDemo\Output folder and you should see an xml file with a GUID for a name, like {7C50104F-FC3E-4A49-B2FA-4F560A37636D}.xml. Open it to verify that it matches your input file.

    Practically, this demo doesn't do a whole heck of a lot, but it shows you the basics for building, publishing and running an orchestration.


  • HAProxy reqrep remove URI on backend request

    - by Jim
    Real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs:

        http://domain/web1
        http://domain/web2

    I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However, I want to strip off the web1 or web2 URI when the request is sent to the backend. Here is my haproxy.cfg:

        frontend webVIP_80
            mode http
            bind :80
            #acl routing to backend
            acl web1_path path_beg /web1
            acl web2_path path_beg /web2
            #which backend
            use_backend webfarm1 if web1_path
            use_backend webfarm2 if web2_path
            default_backend webfarm1

        backend webfarm1
            mode http
            reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms
            server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms

        backend webfarm2
            mode http
            reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms
            server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms

    If I go to http://domain/web1 or http://domain/web2, I see in the error logs on a server in each backend that the request is still for the resource /web1 or /web2 respectively. Therefore I believe there is something wrong with my regular expression, even though I copied and pasted it from the documentation: http://code.google.com/p/haproxy-docs/wiki/reqrep

    Summary: I'm trying to route traffic based on URI; however, I want to strip the URI on the backend side. Go to http://domain/web1 -> backend request of / to webfarm1.

    Thank you!
    -Jim


  • RouterOS on Hyper-V (v3/2012) - any way to get it working?

    - by TomTom
    Trying to set up a small VPN point to connect into a remote Hyper-V cluster using RouterOS. Has anyone got it working on Hyper-V with the latest builds of RouterOS? It seems the legacy network adapter is not supported anymore either (or is just broken). The platform is Windows Server 2012 RC.

    This is not a high-performance setup - the RouterOS box won't do the routing for more than the backend administrative access, and the only real traffic we will see there is when ISO images for new operating systems are uploaded. Otherwise we will possibly have RDP traffic as well as web/HTTP traffic, but this is internal only (dashboards, some control panel). The server has no public business. So the price for non-virtualized network cards is OK for me.

    After hooking up, ping just does not work. After some time I see the entry in Windows (arp -a on the command line), so I know that the Hyper-V side is set up properly. Just no packets arrive. I have turned off all protection on Hyper-V (or rather, not turned it on), so no MAC spoofing protection etc. in the Advanced page for the legacy adapters.

    Unless I can get it to work I will have to resort to using a Windows install as the router/VPN endpoint, which introduces another OS into the fabric (we run all our routers etc. so far on Mikrotik hardware, which is why I want this one to be RouterOS, too). And no, putting hardware there is NOT an option - the cost would be significant.


  • Configuring VirtualBox host only networking: OSX host, Ubuntu guest

    - by Greg K
    I have an Ubuntu guest configured with two interfaces: eth0 is using NAT and works fine, I can access the net. The second interface, eth1, is set to host-only networking and VirtualBox has created a vboxnet0 virtual adapter on the host. I've configured vboxnet0 in the VirtualBox adapter settings with the following:

        ip      192.168.21.20
        subnet  255.255.255.0

    Once the VM guest is running, ifconfig on OSX shows vboxnet0 set up as:

        vboxnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether 0a:00:27:00:00:00
            inet 192.168.21.20 netmask 0xffffff00 broadcast 192.168.21.255

    In the guest, eth0 is set to use DHCP. I've statically assigned eth1 to 192.168.21.20 (is this a mistake?):

        auto eth1
        iface eth1 inet static
            address 192.168.21.20
            netmask 255.255.255.0
            network 192.168.21.0
            broadcast 192.168.21.255
            gateway 192.168.21.1

    There is no device on 192.168.21.1 - what should I set my gateway to?

    In the guest the routes look like so:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.21.0    *               255.255.255.0   U     0      0        0 eth1
        10.0.2.0        *               255.255.255.0   U     0      0        0 eth0
        default         10.0.2.2        0.0.0.0         UG    100    0        0 eth0
        default         192.168.21.1    0.0.0.0         UG    100    0        0 eth1

    Route table on OSX:

        $ netstat -nr
        Routing tables

        Internet:
        Destination        Gateway            Flags    Refs      Use   Netif Expire
        default            10.77.36.1         UGSc       28        0     en1
        10.77.36/22        link#5             UCS         5        0     en1
        10.77.39.38        127.0.0.1          UHS         1     2236     lo0
        10.77.39.255       link#5             UHLWbI      1       66     en1
        127                127.0.0.1          UCS         0        0     lo0
        127.0.0.1          127.0.0.1          UH          1     8642     lo0
        169.254            link#5             UCS         0        0     en1
        192.168.21         link#7             UC          2        0 vboxnet
        192.168.21.20      a:0:27:0:0:0       UHLWI       0        4     lo0
        192.168.21.255     link#7             UHLWbI      2       64 vboxnet

    I can't SSH from the host to the guest (I used to be able to when the VM was configured with a bridged connection):

        $ ssh 192.168.21.20
        ssh: connect to host 192.168.21.20 port 22: Connection refused

    What have I done wrong here? TIA


  • Configuring default gateway returned by dhcp server

    - by comp1mp
    Hello, I have a machine which connects via ethernet to a private LAN, and wireless to a network which provides internet connectivity. The private LAN uses a wireless router to perform DHCP. The problem is that the wireless and NIC adapters have different default gateways. The default gateway for the private LAN has a lower adapter metric, and is thus chosen by the routing algorithm. I am thus unable to browse the internet when connected to both adapters. The following link has a solution for manually setting the adapter metric to a high number. http://superuser.com/questions/77822/how-to-tell-windows-7-to-ignore-a-default-gateway I was hoping to find a different solution. Does any one know of a router that allows you to configure its DHCP server to return an empty default gateway? I cannot find such an option for my linksys wrt300n. Configuring a static ip address with no default gateway does work, however I would like to use DHCP if possible. Does anyone know of a different way to specify a default gateway for a windows 7 machine with multiple network adapters without mucking with the adapter metric? Thanks, Matthew


  • Connecting to ItsHidden in Ubuntu 9.10 problems

    - by Ionel Bratianu
    I am trying to set up a VPN connection to ItsHidden on Ubuntu 9.10. I double-checked my credentials in the VPN configuration, but I don't think that this is the problem. In my syslog I get these messages:

        Jan 11 14:38:46 NetworkManager: Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
        Jan 11 14:38:46 NetworkManager: VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 4502
        Jan 11 14:38:46 NetworkManager: VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
        Jan 11 14:38:46 NetworkManager: VPN plugin state changed: 1
        Jan 11 14:38:46 NetworkManager: VPN plugin state changed: 3
        Jan 11 14:38:46 pppd[4506]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
        Jan 11 14:38:46 NetworkManager: VPN connection 'ItsHidden' (Connect) reply received.
        Jan 11 14:38:46 pppd[4506]: pppd 2.4.5 started by root, uid 0
        Jan 11 14:38:46 pppd[4506]: Using interface ppp0
        Jan 11 14:38:46 NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Jan 11 14:38:46 NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
        Jan 11 14:38:46 pppd[4506]: Connect: ppp0 /dev/pts/1
        Jan 11 14:39:06 pptp[4508]: nm-pptp-service-4502 fatal[get_ip_address:pptp.c:430]: gethostbyname 'vpn.itshidden.com': HOST NOT FOUND
        Jan 11 14:39:06 pppd[4506]: Modem hangup
        Jan 11 14:39:06 pppd[4506]: Connection terminated.
        Jan 11 14:39:06 NetworkManager: VPN plugin failed: 1
        Jan 11 14:39:06 NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Jan 11 14:39:06 pppd[4506]: Exit.
        Jan 11 14:39:06 NetworkManager: VPN plugin failed: 1
        Jan 11 14:39:06 NetworkManager: VPN plugin failed: 1
        Jan 11 14:39:06 NetworkManager: VPN plugin state changed: 6
        Jan 11 14:39:06 NetworkManager: VPN plugin state change reason: 0
        Jan 11 14:39:06 NetworkManager: connection_state_changed(): Could not process the request because no VPN connection was active.
        Jan 11 14:39:06 NetworkManager: Policy set 'Auto eth0' (eth0) as default for routing and DNS.
        Jan 11 14:39:19 NetworkManager: [1263213559.003098] ensure_killed(): waiting for vpn service pid 4502 to exit
        Jan 11 14:39:19 NetworkManager: [1263213559.003289] ensure_killed(): vpn service pid 4502 cleaned up

    Because the gethostbyname call is failing, I suppose that NetworkManager doesn't know that I use proxies to access the Internet. I'm not sure that this is the real problem. Could you tell me a solution that stops gethostbyname from failing?


  • Existing open source software for wireless mesh networking?

    - by user70352
    Hello! My goal: build a wireless mesh network with some ALIX 2D2 boards (500 MHz AMD Geode LX800 x86 CPU, 256 MB RAM, Atheros wireless card). Aside from working like a normal wireless mesh network, users should be able to read/write data from/to the ALIX boxes, and the ALIX boxes should be able to process data.

    Questions:
    1. Should I try to flash dd-wrt x86, Voyage Linux (linux.voyage.hk) or something else?
    2. What (open source) software should I look into before I start?
    3. Should I use a 'server' for data storage and processing instead of the ALIX boxes? Is it even possible to use the ALIX boxes for routing AND data storage and processing?

    Final notes: the data can be anything. For example, I want to set up a wireless mesh using the OLSRD protocol so my whole town gets wifi and can access songs on the network. It's not for that, but that's the idea. I'm not afraid of programming, compiling or working with *nix. This is mostly 'for fun' rather than for practicality. Thanks in advance for any feedback.


  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable. At that point all the clients that point to that server start performing very slowly.

    The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us.

    My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions:

    1. Use linux-ha/Pacemaker and failover IPs (so the DNS IP VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' DNS servers.

    Anycasting seems to work only at the IP routing level, and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is more aimed at service discovery and auto-configuration rather than regular DNS resolving. Am I missing an obvious solution?


  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get error code 691 on the client, which says:

        Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    I validated that the username and password are correct. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports that security method. But I still cannot log in. In the server log I get this entry:

        Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    Any idea of what I can do? Thanks in advance!

    The full event record:

        Log Name: Security
        Source: Microsoft-Windows-Security-Auditing
        Date: 12/29/2010 7:12:20 AM
        Event ID: 6273
        Task Category: Network Policy Server
        Level: Information
        Keywords: Audit Failure
        User: N/A
        Computer: VPN.domain.com
        Description: Network Policy Server denied access to a user. Contact the Network Policy Server administrator for more information.
        User:
            Security ID: domain\Administrator
            Account Name: domain\Administrator
            Account Domain: domain
            Fully Qualified Account Name: domain.com/Users/Administrator
        Client Machine:
            Security ID: NULL SID
            Account Name: -
            Fully Qualified Account Name: -
            OS-Version: -
            Called Station Identifier: 192.168.147.171
            Calling Station Identifier: 192.168.147.191
        NAS:
            NAS IPv4 Address: -
            NAS IPv6 Address: -
            NAS Identifier: VPN
            NAS Port-Type: Virtual
            NAS Port: 0
        RADIUS Client:
            Client Friendly Name: VPN
            Client IP Address: -
        Authentication Details:
            Connection Request Policy Name: Microsoft Routing and Remote Access Service Policy
            Network Policy Name: All
            Authentication Provider: Windows
            Authentication Server: VPN.domain.home
            Authentication Type: EAP
            EAP Type: Microsoft: Secured password (EAP-MSCHAP v2)
            Account Session Identifier: 313933
        Logging Results: Accounting information was written to the local log file.
        Reason Code: 16
        Reason: Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.


  • ADSIEdit Cleanup After Exchange 2003 Crash During Transition To Exchange 2010

    - by ThaKidd
    Hello all. I would value some input from a few Exchange 2010 experts. I have almost completed the transition from Exchange 2003 Standard to Exchange 2010 Standard. Everything went smoothly until I tried to uninstall Exchange 2003; at that point the server bit the dust and died completely. I now have NO access to the old Exchange System Management MMC, as I am running Windows 2008 SR2 and Windows 7 only. I can only fix this with ADSIEdit, the EMShell, and the EMConsole.

    I have used the 2010 shell to move/remove/verify that all mailboxes, public folders and the OAB are hosted on Exchange 2010. I also verified that the routing group connector has been deleted. The only two things that were not done were removing the Recipient Update Service and actually performing the removal of the 2003 software.

    I have spent a lot of time going through ADSIEdit and have located the old Administrative Group and the Exchange 2003 server listed under it. I also located the Recipient Update Service, which includes two entries: Enterprise and my domain name. I have read that it is an unwise idea to remove the old administrative group, so I won't bother messing with that.

    I am repeatedly getting warnings in the Application Log, both from MSExchangeTransport: EventID 5006 (Cannot find route to Mailbox Server OLDSERVER) and EventID 5020 (The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003).

    So my questions are: To clean the old Exchange 2003 info out of AD, can I safely delete the server name folder (Configuration - Services - Microsoft Exchange - ExchOrg - Administrative Groups - First Administrative Group - Servers - Old Server) and also delete the Recipient Update Service (Enterprise) and Recipient Update Service (DOMAIN) containers? Are there any additional items I need to address to ensure AD is clean? Thanks in advance for your help!


  • pfSense Firewall or Linksys/Cisco router for small offices

    - by Tim Meers
    I'm about to start switching some networks around for multiple small offices. Each office has about 10 to 15 users and 10 to 15 computers, and a spread of generic routers and access points. The routers vary from being used as actual routers to just being access points for wireless. Nothing formal has really ever been implemented for each of the 10 offices.

    What I'm wanting is to set up a pfSense box for each office to handle things like:

    - traffic shaping (for VoIP QoS)
    - URL filtering
    - DHCP
    - static routing
    - multiple VLANs

    I'll then use some of the existing hardware for wireless, maybe even integrating the wireless right into the firewall depending on the office layout.

    So my question: would it be better to do a full-blown firewall box, or to buy a new business-class or high-end consumer-class Linksys router to do the URL filtering, QoS and DHCP? Each option could allow for remote access and VPN for remote maintenance, and each would only cost a nominal amount of money for something decent, i.e. under $250.


  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather, there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window, even at a resolution of 800x600 pixels. The connection speeds are 10 Mbit up at home and about 1 Mbit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time.

    Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel, along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate.

    The application to be displayed is a QEMU SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without that (to reduce complexity).

    The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps.

    Both machines are running Gentoo with the software in question being:

    workstation
    - X.Org X Server 1.10.4
    - OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e
    - QEMU emulator version 0.15.1 (qemu-kvm-0.15.1)

    laptop
    - X.Org X Server 1.12.2
    - OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j

