Search Results

Search found 41235 results on 1650 pages for 'source control bindings'.


  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite differently from other places I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could have been a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles" (a minimal sketch follows at the end of this post). Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

    The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix goes out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical-devices software background, I find this unbelievable.) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see them if they don't use the feature the bug is in. The director knows this is a problem, as the company is suddenly starting to grow, with some big clients coming on board and more smaller ones. I have been asked to look at source control options in order to stop deploying buggy or unfinished code, without sacrificing the somewhat asynchronous nature of the team's releases.

    I have used VSS, TFS, SVN and Bazaar in my career, but most of my experience is with TFS. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where developers work directly in Dev for a month and changes are then merged to Test and Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, with either CruiseControl or Team Build. In my previous job Bazaar was used on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed; labels were used to mark builds in prod from which releases were made, and these were made into branches for bug fixes to releases and merged back to dev. This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement, the "continuous integration" approach as I see it breaks down: to get a new feature out it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev.

    I am thinking that to overcome this we should go to a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. Other feature branches can grab changes from other branches when they need or want them, so eventually all changes get absorbed into everyone else's. This fits very much the pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this?

    A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is just about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!
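
    As an aside, for readers who have not met the "feature toggles" mentioned above, here is a minimal hypothetical sketch in C# (nothing here is from the shop's actual code): unfinished or customer-specific features ship dark and are switched on per customer through configuration, instead of being isolated on branches.

        // Minimal feature-toggle sketch: a feature is "on" only if its name
        // appears in the customer's configured feature list.
        using System;
        using System.Collections.Generic;

        class FeatureToggles
        {
            private readonly HashSet<string> enabled;

            public FeatureToggles(IEnumerable<string> enabledFeatures)
            {
                enabled = new HashSet<string>(enabledFeatures, StringComparer.OrdinalIgnoreCase);
            }

            public bool IsEnabled(string feature) => enabled.Contains(feature);
        }

        // Usage: guard the new code path behind the toggle (names hypothetical).
        // var toggles = new FeatureToggles(config.EnabledFeaturesForCustomer);
        // if (toggles.IsEnabled("NewInvoiceScreen")) ShowNewInvoiceScreen();
        // else ShowLegacyInvoiceScreen();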

    Read the article

  • Open source projects, how to choose?!

    - by Dhaivat Pandya
    I would like to join an open source project, since I think I am good enough at programming to progress to reading others' code and modifying it. But the problem is, how do I choose an open source project to work on? I know many languages; the chief ones I am good at are Python, C++ (not really very good at C; the lack of object orientation is difficult for me) and Java. For C++, I am proficient with Qt. I would like to start with something that isn't huge and hasn't reached a phase where the bugs are so complicated it would take me a month to understand what affects each one. Any suggestions? At the current time, I don't use any libraries in any of the mentioned languages that I would need to modify (AFAIK).

    Read the article

  • Using an open source non-free license

    - by wagglepoons
    Are there any projects/products out there that use an open source license that basically says "free for small companies" and "costs money for larger companies", in addition to "make modifications available"? (And are there any standard licenses with such wording?) If I were to release a project under such a license, would it be automatically shunned by every developer on the face of the earth, or, assuming it is actually a useful project, does it have a fair chance of getting contributions from Joe Programmer? The second part of this question can easily become subjective, but any well-argued point of view will be highly appreciated. For example, do dual-licensed projects made by commercial entities have success with the open source communities?

    Read the article

  • How to review the current state of open source vs. closed source graphics drivers?

    - by Bucic
    How can I know whether it's worth replacing the open source drivers installed by default with proprietary ones? Are there any benchmarks? Summaries of major known issues? I don't mean 'at the time of writing this post'; I mean an up-to-date status on how the drivers compare. This page https://help.ubuntu.com/community/BinaryDriverHowto/ certainly doesn't do much on the matter, nor does it even mention Intel. EDIT: I've just learned there is no Intel proprietary driver, because they made their drivers open source: http://askubuntu.com/a/17395/29347

    Read the article

  • Incorporating GPL Code in my Open Source Project

    - by rutherford
    I have downloaded a currently inactive GPL project with a view to updating it and releasing the completed codebase as open source. I'm not really a fan of the GPL, though, and would rather licence my project under BSD. What are my options? Is it just a case of keeping any existing untouched code under the GPL, while any updated stuff can be BSD (messy)? The source will essentially be the same codebase, i.e. there is no logical separation between the two, and they certainly can't be split into anything resembling different libraries. Are my only realistic options to either GPL the whole thing or seek the original author's permission to release everything under BSD?

    Read the article

  • Building SANE from git source produces a backend mismatch on 12.04 even if built locally

    - by deinonychusaur
    It seems to me that with Ubuntu Precise Pangolin it is all but easy to do a proper install of SANE from source (git repo). I've found other scanning issues while trying to find an answer to this, where the output people posted seems to indicate they suffer the same issue (unknowingly). If I run on a fresh install of Ubuntu 12.04 with SANE compiled from git, I get:

    $ scanimage -V
    scanimage (sane-backends) 1.0.24git; backend version 1.0.22

    (I basically followed the instructions on http://ubuntuportal.com/2012/02/how-to-get-an-canon-canoscan-lide-100-scanner-to-work-in-ubuntu-11-10linux-mint-12.html since I didn't find any other information, making sure that SANE was not installed prior to installation.) My primary interest is the epson2 backend. In 1.0.22 it offers the wrong TPU settings for the Epson V700 (TPU2 mode wasn't supported in 1.0.22, and the scanner is useless to me if I don't have TPU2 support). If I ask it to enter transparency mode it shows 1.0.22 behaviour, which implies that the epson2 backend comes from 1.0.22 and not 1.0.24, even though I just built it. If I install SANE with a prefix to a local folder and run that version of scanimage, it still produces the mismatch. However, on another computer where I installed a custom 1.0.22 build of SANE prior to upgrading to Ubuntu 12.04, I can build and install the same SANE git locally and have it correctly match backends:

    $ ./SANE/bin/scanimage -V
    scanimage (sane-backends) 1.0.24git; backend version 1.0.24
    $ scanimage -V
    scanimage (sane-backends) 1.0.22; backend version 1.0.22

    On this computer the 1.0.24 build correctly finds TPU2 on the Epson V700. So what am I missing or doing wrong? (I want to replace 1.0.22 with 1.0.24 for the whole system; the local build was just for debugging.) Any help would be much appreciated.

    Edit 1: I just tried compiling SANE using this instruction on Ubuntu 10.04 and it worked like a charm. However, when I upgraded to 12.04 (I really would like to run 12.04), SANE was downgraded to 1.0.22. When trying the same set of instructions on 12.04 I was still out of luck; the backend mismatch was there again (and I do have libusb-dev installed).

    Edit 2: I updated to Ubuntu 12.10, which now has the 1.0.23 SANE drivers. I haven't dared to try compiling from source on 12.10, since 1.0.23 is good enough for me. This is just a work-around and I would still like to know what's up with Ubuntu 12.04.

    Read the article

  • how to contribute the same source code to two separate open-source projects?

    - by Jason S
    Let's say there are two similar open source projects A and B, both licensed under the Apache Software License 2.0. I would like to contribute an improvement to both projects (because I don't know which one is administered better, and I would like to see my improvement show up in both). Is there a way I can contribute this improvement to both projects in a simple way? (One obvious approach is to start an open source project C licensed under Apache 2.0, but that's a headache for various reasons; I don't want to maintain a project myself)

    Read the article

  • how to start fixing bugs in open source software

    - by suryak
    I am a student with good knowledge of C programming, and I would like to contribute to an open source project developed in C. I searched SourceForge and selected 7-Zip, because it's a widely used tool developed in C. I thought to start by fixing bugs (which was suggested by many people on their websites) and went through a few bug reports, but I couldn't understand how to respond to them or how to start fixing them. I have even gone through some files in the source code that I downloaded, but didn't understand anything. Could you please explain how to approach this? Please help me!

    Read the article

  • Anti Cloud Open Source License

    - by Steve
    I'm working on a browser-based open source monitoring project that I want to be free to the community. What I'm worried about is someone taking the project, renaming it, deploying it in the cloud and starting to charge people who don't even know my project exists. I know I maybe shouldn't mind, but it just sticks in my throat a bit that someone could take a free ride like that and contribute nothing back. Is there any common open source license that can prevent this? I know the GPL and AGPL don't.

    Read the article

  • Open source management game in java

    - by jcw
    I am trying to find an open source sport-management game, much like the ones in the question linked below, but am failing to do so. There are two links provided in that question that are both fine, except for one minor problem: I only know Java! Is there an open source sports manager project? After some googling, I have been unsuccessful in finding a sports-management game written in Java. I do not particularly care about the type of sport, because I am mostly interested in the mechanics. Does anyone know of any such projects, or am I out of luck with Java?

    Read the article

  • Open source login solution

    - by David
    Authentication is such a general problem, which most websites have to implement. There are a few commercial solutions, but all lack sufficient functionality to customize the registration process. Therefore, I am looking for an open source alternative. I am using PHP with PostgreSQL as the database, but as far as I understand one could take an authentication solution built on other technologies and integrate it into our site in various ways. So I am looking for such solutions in any technology, apart from those requiring Microsoft infrastructure. I would prefer an open source solution which has already implemented the following features:
    - a password recovery procedure
    - the username is the email address of the user
    - "Remember me" functionality (meaning that the user is logged in automatically without seeing the login page)
    - email address verification
    Google has gotten me nowhere on this, and neither has a search on this site.

    Read the article

  • Secure Open Source?

    - by opatachibueze
    I want to make a delicate application of mine (an antivirus, actually) open source, but I want to have control over who obtains the source. Preferably, people should apply and I or administrators approve their applications. Is there any online platform for this? The main reason for the control/security is to prevent malware makers from easily discovering how to bypass the stealth-checking methods it uses for malware detection. Edit: I am looking for advice, ideally from someone who has done something similar. Thanks!

    Read the article

  • How do open-source projects grow?

    - by dan_waterworth
    I know of lots of software that is open source. For at least some of it, someone, somewhere must have written the first version alone. How does good open source software become well known? I'm most interested in the first steps: how does software written by one person gain its first new contributors? I'm looking for practical advice. I've started a project here, called aodbm. What steps can I take to give it the best possible start?

    Read the article

  • Preparing to release code as open-source

    - by Raphael
    I have developed a fully functional tool which I would like not only to share with anyone interested, but also to get support from the community for. The tool is cross-platform, written in C++ with Qt; the code is well commented, but I still lack any documentation. There are also some small issues and improvements to be made before I can call it a stable, final version. What are the first steps I have to take to release code as open source and attract people interested in contributing? This is my first serious attempt to release open source code and I really don't know where to start. Should I just push it to GitHub, put together a small wiki and pray for the best?

    Read the article

  • Removing a file from TortoiseHG data source

    - by Hossein Margani
    Hi! I am using TortoiseHg for source code control on Windows. I forgot to edit the ".hgignore" file, and now I have a huge ".hg" folder, which I know is because of DLL, EXE and PDB files that I do not need. Changing the ignore file now does not remove those files. What should I do to delete these files completely from my TortoiseHg repository? Thank you.
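
    A common remedy (a hedged sketch; the excluded paths are hypothetical, and nothing here is specific to TortoiseHg itself) is Mercurial's convert extension, which can rewrite the repository without the offending paths. You pass it a filemap, as in hg convert --filemap filemap.txt oldrepo newrepo, where filemap.txt lists the paths to drop:

        exclude bin
        exclude obj
        exclude Debug
        exclude Release

    Note that convert produces a new repository with new changeset hashes, so every existing clone has to be re-made from the converted repository.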

    Read the article

  • European Interoperability Framework - a new beginning?

    - by trond-arne.undheim
    The most controversial document in the history of the European Commission's IT policy is out. EIF is here, wrapped in the Communication "Towards interoperability for European public services", and including the new feature European Interoperability Strategy (EIS), arguably a higher strategic take on the same topic. Leaving EIS aside for a moment, the EIF controversy has been about IPR, defining open standards, and the proper terminology around standardization deliverables. Today, as the document finally emerges, what is the verdict?

    First of all, to be fair to those among you who do not spend your lives in the intricate labyrinths of Commission IT policy documents on interoperability, let's define what we are talking about. According to the Communication: "An interoperability framework is an agreed approach to interoperability for organisations that want to collaborate to provide joint delivery of public services. Within its scope of applicability, it specifies common elements such as vocabulary, concepts, principles, policies, guidelines, recommendations, standards, specifications and practices."

    The Good
    - EIF reconfirms that "The Digital Agenda can only take off if interoperability based on standards and open platforms is ensured" and also confirms that "The positive effect of open specifications is also demonstrated by the Internet ecosystem."
    - EIF takes a productive and pragmatic stance on openness: "In the context of the EIF, openness is the willingness of persons, organisations or other members of a community of interest to share knowledge and stimulate debate within that community, the ultimate goal being to advance knowledge and the use of this knowledge to solve problems" (p. 11). "If the openness principle is applied in full: all stakeholders have the same possibility of contributing to the development of the specification and public review is part of the decision-making process; the specification is available for everybody to study; intellectual property rights related to the specification are licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software" (p. 26).
    - EIF is a formal Commission document. The former EIF 1.0 was a semi-formal deliverable from the PEGSCO, a working group of Member State representatives.
    - EIF tackles interoperability head-on and takes a clear stance: "Recommendation 22. When establishing European public services, public administrations should prefer open specifications, taking due account of the coverage of functional needs, maturity and market support."
    - The Commission will continue to support the National Interoperability Framework Observatory (NIFO), reconfirming the importance of coordinating such approaches across borders.
    - The Commission will align its internal interoperability strategy with the EIS through the eCommission initiative.
    - One cannot stress the importance of using open standards enough, whether in the context of open source or non-open-source software, and the EIF seems to have picked up on this fact. What does the EIF say about the relation between open specifications and open source software? The EIF introduces, as one of the characteristics of an open specification, the requirement that IPRs related to the specification have to be licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software. In this way, companies working under various business models can compete on an equal footing when providing solutions to public administrations, while administrations that implement the standard in their own software (software that they own) can share such software with others under an open source licence if they so decide.
    - EIF is now among the centerpieces of the Digital Agenda (even though this demands extensive inter-agency coordination in the Commission): "The EIS and the EIF will be maintained under the ISA Programme and kept in line with the results of other relevant Digital Agenda actions on interoperability and standards such as the ones on the reform of rules on implementation of ICT standards in Europe to allow use of certain ICT fora and consortia standards, on issuing guidelines on essential intellectual property rights and licensing conditions in standard-setting, including for ex-ante disclosure, and on providing guidance on the link between ICT standardisation and public procurement to help public authorities to use standards to promote efficiency and reduce lock-in." (Communication, p. 7)

    All in all, quite a few good things have happened to the document in the two years it has been on the shelf or was being rewritten, depending on your perspective; in any case, awaiting the storms to calm.

    The Bad
    - While a certain pragmatism is required, and governments cannot migrate to full openness overnight, EIF gives a bit too much room for governments not to apply the openness principle in full. Plenty of reasons are given, which should maybe have been put as challenges to be overcome: "However, public administrations may decide to use less open specifications, if open specifications do not exist or do not meet functional interoperability needs. In all cases, specifications should be mature and sufficiently supported by the market, except if used in the context of creating innovative solutions."
    - EIF does not use the internationally established terminology: open standards. Rather, the EIF introduces the notion of "formalised specification". How do "formalised specifications" relate to "standards"? According to the FAQ provided: the word "standard" has a specific meaning in Europe as defined by Directive 98/34/EC. Only technical specifications approved by a recognised standardisation body can be called a standard. Many ICT systems rely on the use of specifications developed by other organisations such as a forum or consortium. The EIF introduces the notion of "formalised specification", which is either a standard pursuant to Directive 98/34/EC or a specification established by ICT fora and consortia. The term "open specification" used in the EIF, on the one hand, avoids terminological confusion with the Directive and, on the other, states the main features that comply with the basic principle of openness laid down in the EIF for European public services. Well, this may be somewhat true, but in reality Europe is 30 years behind in terminology. Unless the European Standardization Reform gets completed in the next few months, most Member States will likely conclude that they will go on referencing and using standards beyond those created by the three European-endorsed monopolists of standardization: CEN, CENELEC and ETSI. Who can afford to follow the strict Brussels rules for what may be called an open standard when, in reality, standards stemming from global standardization organizations, the so-called fora/consortia, dominate the IT industry? What exactly is EIF saying? Does it encourage Member States to go on using non-ESO standards as long as they call them something else? I guess I am all for it, although it is a bit cumbersome, no?

    Why was there so much interest around the EIF? The FAQ attempts to explain: some Member States have begun to adopt policies to achieve interoperability for their public services. These actions have had a significant impact on the ecosystem built around the provision of such services, e.g. providers of ICT goods and services, standardisation bodies, industry fora and consortia, etc. The Commission identified a clear need for action at European level to ensure that actions by individual Member States would not create new electronic barriers that would hinder the development of interoperable European public services. As a result, all stakeholders involved in the delivery of electronic public services in Europe have expressed their opinions on how to increase interoperability for public services provided by the different public administrations in Europe. Well, it does not take two years to read 50 consultation documents, and the EU Standardization Reform is not yet completed; so, more pragmatically, you finally had to release the document. OK, let's leave some of that aside, because the document is out and some people are happy (and others definitely not).

    The Verdict
    Considering the controversy, the delays, the lobbying, and the interests at stake in the EU, in Member States and among vendors large and small, this document is pretty impressive. As with a good wine that has not yet come to full maturity, let's say it seems to be coming in in the 85-88/100 range; only a more fine-grained analysis, enjoyment in good company, and ultimately implementation will tell. The European Commission has today adopted a significant interoperability initiative to encourage public administrations across the EU to maximise the social and economic potential of information and communication technologies. Today, we should rally around this achievement. Tomorrow, let's sit down and figure out what it means for the future.

    Read the article

  • Development processes, the use of version control, and unit-testing

    - by ct01
    Preface
    I've worked at quite a few "flat" organizations in my time. Most of the version control policy/process has been "only commit after it's been tested". At each place we were constantly committing to "trunk" (cvs/svn). The same was true with unit testing: it has always been a "we need to do this" mentality, but it never really materializes in a substantive form, because there is no institutional knowledge base to do it, and no mentorship.

    Version control
    The emphasis for version control management at one place was a very strict protocol for commit messages (format and content). The other places let employees just do "whatever". The branching, tagging, committing, rolling-back and merging aspects of things were always ill-defined and almost never used. This seems to leave the version control system in the position of being a fancy file-storage mechanism with a metadata component that never really gets accessed or utilized. (The same was true for unit testing and committing code to the source tree.)

    Unit tests
    There seems to be a prevailing "we must/should do this" mentality in most places I've worked. As a policy or standard operating procedure it never gets implemented, because there is a very ill-defined understanding of what that means, what is going to be tested, and how to do it.

    Summary
    Most places I've been to think version control and unit testing are "important" because the trendy trade journals say so, but if there is very little mentorship in using these tools, or any real business policies, then the full power of version control and unit testing is never really expressed. So grunts, like myself, never really develop a complete understanding of the point beyond "it's a good thing" and "we should do it".

    Question
    I was wondering if there are blogs, books, white papers, or online journals about what one could call the business process, "standard operating procedures", or use cases for version control and unit testing? I want to know more than the trade journals tell me and get serious about doing these things.

    PS: @Henrik Hansen had a great comment about the lack of definition in the question. I'm not interested in a specific unit-testing/versioning product or methodology (like XP); my interest is more about workflow at the individual team/developer level than evangelism. This is more or less a byproduct of the management situation I've operated under, rather than a lack of reading software engineering books or magazines about development processes. A lot of what I've seen/read is more marketing-oriented material than any specifically enumerated description of "well, this is how our shop operates".

    Read the article

  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN's SSIS forum the other day that went something like this: "I'm loading data into a table from a flat file, but I want to be able to store the name of that file as well. Is there a way of doing that?" I don't want to come across as disrespecting those who took the time to reply, but there were a few answers along the lines of "loop over the files using a For Each, store the file name in a variable, yadda yadda yadda", when in fact there is a much, much simpler way of accomplishing this; it just happens to be a little hidden away, as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which, for some reason, isn't exposed through the Flat File Source editor; it is, however, exposed via the Advanced Properties: You'll see in the screenshot above that I have set FileNameColumnName="Filename" (it doesn't matter what name you use; any non-zero-length string will work). What this will do is create a new column in our dataflow called "Filename" that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager, as it allows you to differentiate between data from each of the files, as you can see in the following screenshot: So there you have it, the FileNameColumnName property; a little-known secret of SSIS. I hope it proves useful to someone out there. @Jamiet
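
    As an aside, here is a minimal sketch of how the same property could be set programmatically through the SSIS pipeline API. This is my own assumption rather than anything from the post above, and the package path, executable index and component name are hypothetical:

        // Hedged sketch: set the Flat File Source's FileNameColumnName custom
        // property from code instead of the Advanced Properties dialog.
        using Microsoft.SqlServer.Dts.Runtime;
        using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

        class SetFileNameColumnName
        {
            static void Main()
            {
                var app = new Application();
                // Hypothetical package path; adjust for your environment.
                Package pkg = app.LoadPackage(@"C:\ssis\LoadFlatFile.dtsx", null);

                // Assumes the first executable is the Data Flow Task hosting the source.
                var pipe = (MainPipe)((TaskHost)pkg.Executables[0]).InnerObject;
                IDTSComponentMetaData100 src =
                    pipe.ComponentMetaDataCollection["Flat File Source"];

                // Any non-zero-length string works as the new column's name.
                src.CustomPropertyCollection["FileNameColumnName"].Value = "Filename";

                app.SaveToXml(@"C:\ssis\LoadFlatFile.dtsx", pkg, null);
            }
        }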

    Read the article

  • Getting WCF Bindings and Behaviors from any config source

    - by cibrax
    The need to load WCF bindings or behaviors from different sources, such as files on disk or databases, is a common requirement when dealing with configuration, on either the client side or the service side. The traditional way to accomplish this in WCF is to load everything from the standard configuration section (the serviceModel section) or to create all the bindings and behaviors by hand in code. However, there is a middle-ground solution that becomes handy when more flexibility is needed. This solution involves getting the configuration from any place, and using that configuration to automatically configure any existing binding or behavior instance created in code. In order to configure a binding instance (System.ServiceModel.Channels.Binding) that you later inject into any endpoint on the client channel or the service host, you first need to get a binding configuration section from any configuration file (you can generate a temp file on the fly if you are using any other source for storing the configuration).

        private BindingsSection GetBindingsSection(string path)
        {
            System.Configuration.Configuration config =
                System.Configuration.ConfigurationManager.OpenMappedExeConfiguration(
                    new System.Configuration.ExeConfigurationFileMap()
                    {
                        ExeConfigFilename = path
                    },
                    System.Configuration.ConfigurationUserLevel.None);
            var serviceModel = ServiceModelSectionGroup.GetSectionGroup(config);
            return serviceModel.Bindings;
        }

    The BindingsSection contains a list of all the configured bindings in the serviceModel configuration section, so you can iterate through all the configured bindings to get the one you need. (You don't need a complete serviceModel section; a section with the bindings only works.)

        public Binding ResolveBinding(string name)
        {
            BindingsSection section = GetBindingsSection(path);
            foreach (var bindingCollection in section.BindingCollections)
            {
                if (bindingCollection.ConfiguredBindings.Count > 0 &&
                    bindingCollection.ConfiguredBindings[0].Name == name)
                {
                    var bindingElement = bindingCollection.ConfiguredBindings[0];
                    var binding = (Binding)Activator.CreateInstance(bindingCollection.BindingType);
                    binding.Name = bindingElement.Name;
                    bindingElement.ApplyConfiguration(binding);
                    return binding;
                }
            }
            return null;
        }

    The code above does just that, and also instantiates and configures the Binding object (System.ServiceModel.Channels.Binding) you are looking for. As you can see, the binding configuration element contains a method "ApplyConfiguration" that receives the binding instance that needs to be configured. A similar thing can be done, for instance, with the "endpoint" behaviors. You first get the BehaviorsSection, and then the behavior you want to use.

        private BehaviorsSection GetBehaviorsSection(string path)
        {
            System.Configuration.Configuration config =
                System.Configuration.ConfigurationManager.OpenMappedExeConfiguration(
                    new System.Configuration.ExeConfigurationFileMap()
                    {
                        ExeConfigFilename = path
                    },
                    System.Configuration.ConfigurationUserLevel.None);
            var serviceModel = ServiceModelSectionGroup.GetSectionGroup(config);
            return serviceModel.Behaviors;
        }

        public List<IEndpointBehavior> ResolveEndpointBehavior(string name)
        {
            BehaviorsSection section = GetBehaviorsSection(path);
            List<IEndpointBehavior> endpointBehaviors = new List<IEndpointBehavior>();
            if (section.EndpointBehaviors.Count > 0 &&
                section.EndpointBehaviors[0].Name == name)
            {
                var behaviorCollectionElement = section.EndpointBehaviors[0];
                foreach (BehaviorExtensionElement behaviorExtension in behaviorCollectionElement)
                {
                    object extension = behaviorExtension.GetType().InvokeMember(
                        "CreateBehavior",
                        BindingFlags.InvokeMethod | BindingFlags.NonPublic | BindingFlags.Instance,
                        null, behaviorExtension, null);
                    endpointBehaviors.Add((IEndpointBehavior)extension);
                }
                return endpointBehaviors;
            }
            return null;
        }

    In this case, the code for creating the behavior instance is trickier. First of all, a behavior in the configuration section actually represents a set of "IEndpointBehavior" instances, and the behavior element you get from the configuration does not have any public method for configuring an existing behavior instance. It only contains a protected method, "CreateBehavior", that you can use for that purpose. Once you have this code implemented, a client channel can be easily configured as follows:

        var binding = resolver.ResolveBinding("MyBinding");
        var behaviors = resolver.ResolveEndpointBehavior("MyBehavior");
        SampleServiceClient client = new SampleServiceClient(binding,
            new EndpointAddress(new Uri("http://localhost:13749/SampleService.svc"),
                new DnsEndpointIdentity("localhost")));
        foreach (var behavior in behaviors)
        {
            if (client.Endpoint.Behaviors.Contains(behavior.GetType()))
            {
                client.Endpoint.Behaviors.Remove(behavior.GetType());
            }
            client.Endpoint.Behaviors.Add(behavior);
        }

    The code above assumes that a configuration file (in any place) with a binding "MyBinding" and a behavior "MyBehavior" exists. That file can look like this:

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="MyBinding">
                <security mode="Transport"></security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <behaviors>
            <endpointBehaviors>
              <behavior name="MyBehavior">
                <clientCredentials>
                  <windows/>
                </clientCredentials>
              </behavior>
            </endpointBehaviors>
          </behaviors>
        </system.serviceModel>

    The same thing can, of course, be done in the service host if you want to manually configure the bindings and behaviors.

    Read the article

  • Understanding IIS Bindings

    - by OWScott
    Internet Information Services (IIS) uses 4 decision points for the site bindings. They are the protocol, port, IP and host header. This video lesson walks through the bindings and shows how each one is used. This is part 5 of a 52 week series on various topics for the Web Administrator. Other weeks include:
    Week 1 – Ping and Tracert
    Week 2 – Understanding DNS zone records
    Week 3 – Nslookup – the Ultimate DNS Troubleshooting Tool
    Week 4 – Three Tricks for Capturing Command Line Output
    Understanding IIS Bindings
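
    As a hedged illustration (the site name and values here are hypothetical, not from the video), the four parts combine in a site's binding definition in IIS's applicationHost.config, where bindingInformation packs the IP address, port and host header together:

        <site name="Contoso" id="2">
          <bindings>
            <!-- bindingInformation = "IP address : port : host header" -->
            <binding protocol="http" bindingInformation="192.168.1.10:80:www.contoso.com" />
            <binding protocol="https" bindingInformation="*:443:" />
          </bindings>
        </site>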

    Read the article

  • Open source vs commercial game engines

    - by Vanangamudi
    How do commercial games accomplish stunning graphics with smooth gameplay? I am a huge die-hard fan and follower of GNU, Stallman and his philosophies, and other libre people (come on, how could I miss Linus), but I have to admit commercial games do an excellent job. One good example is Assassin's Creed from Ubisoft. It has good-quality graphics and plays smoothly on my dual-core CPU with an Nvidia GeForce 8400ES. Rockstar's GTA4 has awesome graphics, but it's slower than AC considering the graphics-quality tradeoff. Age of Empires from Ensemble Studios includes massive crowd AI simulation, yet it plays so smoothly, with eye-candy graphics, very large weapon sets and different tech-tree elements. On the other hand, open source games like Glest and 0 A.D. (still in alpha :) are not so smooth, even though they have very restricted abilities. Coming to the question: how do game companies achieve such optimizations? Is the open source community simply not doing these optimizations, or are there proprietary technological elements that benefit only the companies? E.g. OpenSubdiv from Pixar was just released openly to the community; something like that. And why is it hard to implement such optimizations? Are there any legal restrictions?

    Read the article

  • Reading source code to learn

    - by perl.j
    As you develop as a programmer, IMO, you begin to see different practices, different algorithms, and "more than one way to do it". Seeing this code can be a great learning experience for you, even though you did not write it. But is doing this only going to confuse you? For example, let's say you have a library, in any language, that was created by a colleague, and you have been using it for a while. You decide to look at the actual source code, regardless of how extensive it is, to get a better look at how the library is written. For the sake of example, the function you use most often from this library is the max function, which finds the larger of two numbers. But this function is a lot more complicated than it needs to be. The way it is written is confusing the heck out of you, and you don't know how it works. Will this make you a better programmer, because you realize how complicated it is for such a simple function, or will it make you a worse coder because you feel less confident? So my question, in general, is: does reading source code make you a better programmer, and if so, how? If not, why do people still do it?
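
    To make the max scenario concrete, here is a hypothetical C# illustration of the gap the poster describes (both functions are mine, not from any actual library):

        // The obvious version: does exactly what the name says.
        static int Max(int a, int b)
        {
            return a >= b ? a : b;
        }

        // A needlessly clever, branch-free version of the kind that can shake a
        // reader's confidence. (a - b) >> 31 is -1 when a < b and 0 otherwise,
        // so the expression yields b in the first case and a in the second.
        // It is only correct while a - b does not overflow.
        static int ObfuscatedMax(int a, int b)
        {
            int diff = a - b;
            return a - (diff & (diff >> 31));
        }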

    Read the article

  • Is there a Source Insight alternative?

    - by hansioux
    I am not a developer, but for my work I trace a lot of code. It is actually rather difficult reading other people's code, especially in bigger projects. Source Insight is a great application that stores all the symbols in a database, so when you see a new function being called you can click on it and see how the function is written. You can see all the referrers of an object or jump to a caller. You don't need to break your train of thought and think up shell commands just to find these things every time you run into a new variable, structure, or function from some other file. I have it running on Wine, but there are little glitches that sometimes get in the way. I know people will mention cscope; I've tried it, but it really isn't the same. So, with so many huge open source projects out there for Ubuntu, are there native tools to help read them efficiently? EDIT: Thanks for the suggestions, but do Code::Blocks or CodeLite provide the ability to see the function the mouse clicked on without jumping to it, so I can see the caller and callee at the same time?

    Read the article
