Search Results

Search found 8840 results on 354 pages for 'drupal developers'.


  • Patterns for dynamic CMS components (event driven?)

    - by CitrusTree
    Sorry, my title is not great; this is my first real attempt at moving 100% to OO, as I've been procedural for more years than I can remember. I'm finding it hard to work out whether what I'm trying to do is possible. Depending on people's thoughts on the two following points, I'll go down that route. The CMS I'm putting together is quite small, but it focuses very much on different types of content. I could easily use Drupal, which I'm very comfortable with, but I want to give myself a really good reason to move into design patterns / OO-PHP.

    1) I have created a base 'content' class which I wish to be able to extend to handle different types of content. The base class, for example, handles HTML content, and extensions might handle XML or PDF output instead. On the other hand, at some point I may wish to extend the base class for a given project completely. I.e., if class 'content-v2' extended class 'content' for that site, any calls to that class should actually call 'content-v2' instead. Is that possible? If the code instantiates an object of type 'content', I actually want it to instantiate one of type 'content-v2'. I can see how to do it using inheritance, but that appears to involve referring to the class explicitly; I can't see how to link in the class I want it to use instead dynamically.

    2) Secondly, the way I'm building this at the moment is horrible, and I'm not happy with it. It feels very linear indeed: get session details, get content, build navigation, theme page, publish. To do this, all the objects are called one by one, which is all very static. I'd like it to be more dynamic so that I can add to it at a later date (this is very closely related to the first question). Is there a way that, instead of my orchestrator class calling all the other classes one by one and building the whole thing up at the end, each of the other classes can 'listen' for specific events and then jump in at the applicable point and do their bit? That way the orchestrator class would not need to know what other classes were required, nor call them one by one. Sorry if I've got this all twisted in my head. I'm trying to build this so it's really flexible.
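    The question is about OO-PHP, but the two ideas map onto well-known patterns: a registry/factory that decides at runtime which concrete class 'content' resolves to, and an observer/event bus so the orchestrator never calls the other components by name. A minimal sketch of that shape, written in Java purely to keep it compact (all class and event names are invented for illustration; the same structure carries over to PHP):

        import java.util.*;
        import java.util.function.Supplier;

        interface Content { String render(); }

        class HtmlContent implements Content {
            public String render() { return "<p>html</p>"; }
        }

        class HtmlContentV2 implements Content {
            public String render() { return "<p>html v2</p>"; }
        }

        // Resolves a logical name ("content") to whichever concrete class
        // the site's bootstrap has registered for it.
        class ContentFactory {
            private final Map<String, Supplier<Content>> registry = new HashMap<>();
            void register(String name, Supplier<Content> ctor) { registry.put(name, ctor); }
            Content create(String name) { return registry.get(name).get(); }
        }

        // Components subscribe to named events; the orchestrator only emits them.
        class EventBus {
            private final Map<String, List<Runnable>> listeners = new HashMap<>();
            void on(String event, Runnable handler) {
                listeners.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
            }
            void emit(String event) {
                listeners.getOrDefault(event, Collections.emptyList()).forEach(Runnable::run);
            }
        }

        public class PageBuilder {
            public static void main(String[] args) {
                ContentFactory factory = new ContentFactory();
                // Per-site bootstrap: "content" now resolves to the v2 class.
                factory.register("content", HtmlContentV2::new);

                EventBus bus = new EventBus();
                // Components register themselves; the orchestrator never names them.
                bus.on("buildNavigation", () -> System.out.println("nav built"));
                bus.on("themePage", () -> System.out.println("page themed"));

                Content content = factory.create("content");
                System.out.println(content.render());
                bus.emit("buildNavigation");
                bus.emit("themePage");
            }
        }

    The per-site bootstrap is the only place that knows 'content' should resolve to the v2 class; everything else asks the factory, and the orchestrator only fires events in order.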

    Read the article

  • Code reviews for larger ASP.NET MVC team using TFS

    - by Parrots
    I'm trying to find a good code review workflow for my team. Most questions similar to this on SO revolve around using shelved changes for the review; however, I'm curious how this works for larger teams. We usually have 2-3 people working a story (UI person, Domain/Repository person, sometimes a DB person). I've recommended the shelveset idea, but we're all concerned about how to manage that with multiple people working the same feature. How could you share a shelveset between multiple programmers at that point? We worry it would be clunky and that we might easily have unintended consequences moving to this workflow. Of course, moving to a shelveset per feature avoids having 10 or so check-ins per feature (as developers need to share code), which is what makes seeing the diffs at code review time painful. Has anyone else been able to deal with this successfully? Are there any tools people have found useful aside from shelvesets in TFS (preferably open-source)?

    Read the article

  • Ways to make your WCF services compatible with non-.NET consumers

    - by Mayo
    I'm working on adding a WCF services layer to my existing .NET application. This layer will be hosted in IIS and will be consumed by a variety of UIs, at least one of which will not use Microsoft technologies. I can make a Web service in WCF that is consumed by my .NET application. However, I'm concerned about things that work in the .NET world but not with other technologies. For example, simply throwing an exception from my WCF service works fine in .NET. But according to this article, one should approach exception handling with fault contracts to ensure compatibility with non-.NET consumers. The author labels this lack of foresight as The Fallacy of the .NET-Only World. Does anyone have any high level suggestions or links to articles that cover interoperability between WCF and non-.NET consumers? I realize I'm potentially working against the YAGNI principle. I'm only really looking to avoid things that will be incredibly difficult to overcome later when the developers of the non-.NET consumer report problems to me.

    Read the article

  • Pros and Cons of Proprietary Software

    - by Jon Purdy
    Proprietary software is about as good as open-source software. There are so many problems with proprietary technologies, however, that I'm beginning to think it's best to avoid them:

    - The software will only be maintained as long as the company exists (and profits).
    - The level of security of the application is as unknowable as the source code.
    - Alterations and derivative works, however necessary and beneficial, are disallowed.

    I simply don't see any point in even learning to use such systems as those created by Microsoft and Apple. Of course I don't pretend that ignorance is the superior option: one has to have a certain working knowledge simply because of the ubiquity of these things. I just don't see any reason why, as an independent developer, I should ever consider it a remotely good idea to actually use them. So that's the question, or discussion topic, or what have you: in what ways do developers benefit at all from using closed-source development software?

    Read the article

  • Do you use another language instead of English?

    - by Luc M
    Duplicate: Should identifiers and comments always be in English or in the native language of the application and developers? For people who are not native English speakers, which language do you use to declare variables, classes, etc.? I had to continue a project from a Spanish developer, and everything was written in Spanish. Since then, I have decided to use English identifiers (variables, classes, file names) and to write comments in French; everything was in French before that. What are the general recommendations about this practice? Do you use English everywhere, knowing that no English speakers will work on your project? Edit: here's a post from Jeff Atwood about this subject: The Ugly American Programmer

    Read the article

  • What's the difference between /123 and /?123?

    - by BoltClock
    I've noticed that some sites (including http://jobs.stackoverflow.com) have query strings that look like this: http://somewebapp.example/?123 as compared to: http://somewebapp.example/123 or http://somewebapp.example/id/123 What are the reasons that developers choose to implement their web apps' URLs using the first example instead of the second and third examples? And as a bonus, how would one implement the first example in PHP, given that 123 is the primary key of some row in a database table? (I just need to know how to retrieve 123 from the URL; I already know how to query the database for a primary key of 123.)
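    For context on the bonus part: the '?123' form is just a query string with no key/value pair, so the raw query string itself is the ID; in PHP that raw string is available as $_SERVER['QUERY_STRING']. As a rough illustration of the same idea, here is a hypothetical Java servlet sketch (the class name and validation are made up; this is not the PHP answer itself):

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // For a request to /?123, getQueryString() returns the raw string "123".
        public class RecordServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String raw = req.getQueryString();   // "123" for /?123, null if absent
                if (raw != null && raw.matches("\\d+")) {
                    long id = Long.parseLong(raw);   // then look up the primary key
                    resp.getWriter().println("Record id: " + id);
                } else {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                }
            }
        }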

    Read the article

  • Is there a "dual user check-in" source control system?

    - by Zubair
    Are there any source control systems that require another user to validate the source code "before" it can be checked in? I want to know, as this is one technique to make sure that code quality stays high. Update: There has been talk of branches in the answers, and while I feel branches have their place, I think branches are something different: when a developer's code is ready to go into the main branch, it "should" be checked. Most often, though, I see that when this happens, a lead developer, or whoever is responsible for the merge into the main branch/stream, just puts the code into the main branch as long as it "compiles" and does no more checks than that. I want the idea of two people putting their names to the code at an early stage so that it introduces some responsibility, and also because the code is cheaper to fix early on and is still fresh in the developer's mind.

    Read the article

  • Proper permissions and directory location for Git version control

    - by CitadelCSAlum
    I am using Git version control on a remote server, and I have set up a repository that multiple people will be using to push to and fetch from. I have put the repo under /srv/subdir/git/.git and have been experiencing problem after problem. a) Is this location suitable for handling a project that will need to be accessed and modified by multiple developers and a designer, or is there a better location? b) Do I need to modify the permissions on the subdir/ and git/ directories in order to allow remote access? If so, what are the appropriate permissions? I know this is a fairly long request/question, but unfortunately, like many other topics with well-covered documentation, the documentation does not always cover best practices. I would appreciate anybody's advice and suggestions. Thanks

    Read the article

  • How to set up Parts dynamically in a MultipartRequestEntity

    - by ee_vin
    Hello, I'm using commons-httpclient-3.1 inside my Android application, and I would like to know if it is possible to manipulate Part (org.apache.commons.httpclient.methods.multipart.Part) objects dynamically, essentially adding a new FilePart and a new StringPart at runtime before sending the request. Every example I've found until now assumes that you know how many fields you are dealing with. E.g.:

        File f = new File("/path/fileToUpload.txt");
        PostMethod filePost = new PostMethod("http://host/some_path");
        Part[] parts = {
            new StringPart("param_name", "value"),
            new FilePart(f.getName(), f)
        };
        filePost.setRequestEntity(
            new MultipartRequestEntity(parts, filePost.getParams())
        );
        HttpClient client = new HttpClient();
        int status = client.executeMethod(filePost);

    Code from http://hc.apache.org/httpclient-3.x/apidocs/org/apache/commons/httpclient/methods/multipart/MultipartRequestEntity.html
    Android-specific thread: http://groups.google.com/group/android-developers/browse_thread/thread/0f9e17bbaf50c5fc
    Thank you
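    A minimal sketch of the dynamic variant, assuming the same commons-httpclient 3.1 classes as the snippet above: accumulate the parts in a List and convert it to an array just before constructing the MultipartRequestEntity. The field name, URL, and file paths here are placeholders:

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.PostMethod;
        import org.apache.commons.httpclient.methods.multipart.FilePart;
        import org.apache.commons.httpclient.methods.multipart.MultipartRequestEntity;
        import org.apache.commons.httpclient.methods.multipart.Part;
        import org.apache.commons.httpclient.methods.multipart.StringPart;

        public class DynamicMultipartUpload {
            public static void main(String[] args) throws Exception {
                // Collect parts in a list whose size is only known at runtime.
                List<Part> parts = new ArrayList<Part>();
                parts.add(new StringPart("param_name", "value"));
                for (String path : args) {                    // e.g. file paths passed in
                    File f = new File(path);
                    parts.add(new FilePart(f.getName(), f));  // throws FileNotFoundException
                }

                PostMethod filePost = new PostMethod("http://host/some_path");
                filePost.setRequestEntity(new MultipartRequestEntity(
                        parts.toArray(new Part[parts.size()]), filePost.getParams()));

                HttpClient client = new HttpClient();
                int status = client.executeMethod(filePost);
                System.out.println("HTTP status: " + status);
                filePost.releaseConnection();
            }
        }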

    Read the article

  • Good File Organization Suggestions for Developer

    - by Paul
    I am struggling a little with folder organization for the many projects that I work on. I work on OS X; right now I am using ~/Development/ as the root folder, and I have many types of projects. For example, I have my iPhone apps under ~/Development/Xcode. I develop in many languages, from PHP to Ruby to Python to Objective-C. So, for example, I might have a couple of open-source apps based on PHP where I am using the Zend Framework. Some of these projects are for clients; others are tests/experiments when learning a new language, or general experimenting. I am really interested in how other developers have organized their code and projects, and in any advice that makes it easy to navigate through code related to many languages and types of projects.

    Read the article

  • Multiple arrangements/asserts per unit test?

    - by lance
    A group of us (.NET developers) are talking about unit testing. Not any one framework (we've hit on MSpec, NUnit, MSTest, RhinoMocks, TypeMock, etc.); we're just talking generally. We see lots of syntax that forces a distinct unit test per scenario, but we don't see an avenue for re-using one unit test with various inputs or scenarios. Also, we don't see an avenue for multiple asserts in a given test without an early assert's failure threatening the testing of later asserts (in the same test). Is there anything like that happening in .NET unit testing (state- or behavior-based) today?
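    For comparison only, the two capabilities being asked about (re-using one test body across inputs, and grouping asserts so an early failure does not hide later ones) look roughly like this in JUnit 5 terms; the values and the discount logic here are invented, and the .NET frameworks named in the question have their own equivalents:

        import static org.junit.jupiter.api.Assertions.assertAll;
        import static org.junit.jupiter.api.Assertions.assertEquals;
        import static org.junit.jupiter.api.Assertions.assertTrue;

        import org.junit.jupiter.params.ParameterizedTest;
        import org.junit.jupiter.params.provider.CsvSource;

        // Illustration only: one parameterized test body reused across inputs,
        // plus grouped assertions that all run even if an earlier one fails.
        class DiscountTest {

            @ParameterizedTest
            @CsvSource({ "100, 0.10, 90", "200, 0.25, 150", "50, 0.00, 50" })
            void appliesDiscount(double price, double rate, double expected) {
                double actual = price * (1 - rate);
                assertAll(
                    () -> assertEquals(expected, actual, 0.001),
                    () -> assertTrue(actual >= 0)   // still checked if the line above fails
                );
            }
        }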

    Read the article

  • Working effectively with unit tests / Anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing to my team, as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj where all the test code is dumped and organised to mirror the namespaces using folders. This approach is not very scalable and makes it difficult to find tests. I've been thinking about adding a Tests folder to each project; this would put the unit tests 'in the developer's face' and make them easy to find. The downside is that the production release code would contain references to NUnit and NMock as well as the test code and test data... Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per 'real' project/assembly would introduce too many new projects. Thanks in advance

    Read the article

  • Branch for each developer in a Git repo

    - by Peter
    I'd like to move my project to GitHub from a local SVN repository. Multiple developers are currently working on this project. I was thinking that each developer should have their own branch in which they would commit changes; when the manager reviews their work, he will merge it into the master branch. I don't want a separate repository for each developer, as GitHub has a limited number of private repositories. Is this a good idea? What are the other alternatives?

    Read the article

  • Reverse engineering a custom data file

    - by kerchingo
    At my place of work we have a legacy document management system that, for various reasons, is now unsupported by its developers. I have been asked to look into extracting the documents contained in this system so they can eventually be imported into a new third-party system. From tracing and process monitoring, I have determined that the document images (mainly TIFF files) are stored in a number of 1.5 GB files. These files seem to be read from a specific offset and then written to a tmp file that is served via a web app to the client, and then deleted. I guess I am looking for suggestions as to how I can inspect these large files that contain the TIFF images, and eventually extract the images and write them to individual files.
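    One hedged starting point, assuming the 1.5 GB containers embed the raw TIFF streams directly: scan for the TIFF byte-order marks ('II' 0x2A 0x00 for little-endian, 'MM' 0x00 0x2A for big-endian) and treat each hit as a candidate document boundary, then slice the bytes between consecutive offsets. The container file name and buffer size below are placeholders, and hits would still need to be validated against an image library:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.ArrayList;
        import java.util.List;

        // Rough sketch: find candidate TIFF start offsets inside a large container file.
        public class TiffOffsetScanner {
            public static void main(String[] args) throws IOException {
                List<Long> offsets = new ArrayList<Long>();
                try (RandomAccessFile raf = new RandomAccessFile("container.dat", "r")) {
                    byte[] buf = new byte[8 * 1024 * 1024];
                    long base = 0;
                    int read;
                    while ((read = raf.read(buf)) > 0) {
                        for (int i = 0; i + 3 < read; i++) {
                            boolean le = buf[i] == 'I' && buf[i + 1] == 'I'
                                    && buf[i + 2] == 0x2A && buf[i + 3] == 0x00;
                            boolean be = buf[i] == 'M' && buf[i + 1] == 'M'
                                    && buf[i + 2] == 0x00 && buf[i + 3] == 0x2A;
                            if (le || be) {
                                offsets.add(base + i);
                            }
                        }
                        if (read <= 3) break;   // too few new bytes to keep scanning
                        base += read - 3;       // step back 3 bytes so a header split
                        raf.seek(base);         // across reads is not missed
                    }
                }
                System.out.println("Candidate TIFF offsets: " + offsets);
                // Next step (not shown): write the bytes between consecutive offsets
                // to individual .tif files and validate each one.
            }
        }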

    Read the article

  • Should I learn Haskell or F# if I already know OCaml?

    - by Unknown
    I am wondering if I should continue to learn OCaml or switch to F# or Haskell. Here are the criteria I am most interested in:

    - Longevity: Which language will last longer? I don't want to learn something that might be abandoned in a couple of years by users and developers. Will Inria, Microsoft, and the University of Glasgow continue to support their respective compilers for the long run?
    - Practicality: Articles like this make me afraid to use Haskell. A hash table is the best structure for fast retrieval, yet Haskell proponents there suggest using Data.Map, which is a binary tree. I don't like being tied to a bulky .NET framework unless the benefits are large. I want to be able to develop more than just parsers and math programs.
    - Well designed: I like my languages to be consistent.

    Please support your opinion with logical arguments and citations from articles. Thank you.

    Read the article

  • Why is 'virtual' optional for overridden methods in derived classes?

    - by squelart
    When a method is declared as virtual in a class, its overrides in derived classes are automatically considered virtual as well, and the C++ language makes the virtual keyword optional in this case:

        class Base {
            virtual void f();
        };
        class Derived : public Base {
            void f(); // 'virtual' is optional but implied.
        };

    My question is: what is the rationale for making virtual optional? I know that it is not absolutely necessary for the compiler to be told, but I would think that developers would benefit if such a constraint were enforced by the compiler. E.g., sometimes when I read others' code I wonder whether a method is virtual, and I have to track down its superclasses to determine that. And some coding standards (Google's, for example) make it a 'must' to put the virtual keyword on all overrides in subclasses.
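    For comparison, the compiler-enforced check being wished for here is essentially what Java's @Override annotation provides (and what C++11 later added with the override specifier). A tiny Java sketch of that idea, with made-up class names:

        // @Override makes the compiler verify that a method really overrides something.
        class Base {
            void f() { }
        }

        class Derived extends Base {
            @Override
            void f() { }       // compiles: really overrides Base.f()

            // @Override
            // void g() { }    // would be a compile error: overrides nothing
        }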

    Read the article

  • Doctrine 1.2 Column Naming Conventions for Many To Many Relationships

    - by Alan Storm
    I'm working with an existing database schema and trying to set up two Doctrine models with a many-to-many relationship, as described in this document. When creating tables from scratch, I have no trouble getting this working. However, the existing join tables use a different naming convention than what's described in the Doctrine document. Specifically:

        Table 1
        --------------------------------------------------
        table_1_id
        ....other columns....

        Table 2
        --------------------------------------------------
        table_2_id
        ....other columns....

        Join Table
        --------------------------------------------------
        fktable1_id
        fktable_2_id

    Basically, the previous developers prefaced all foreign keys with an fk. From the examples I've seen and some brief experimenting with code, it appears that Doctrine 1.2 requires the join table to use the same column names as the tables it's joining. Is my assumption correct? If so, has the situation changed in Doctrine 2? If the answers to either of the above are true, how do you configure the models so that all the columns 'line up'?

    Read the article

  • How to do a cost-benefit analysis for platform-level features?

    - by Callister Park
    I work on a development team that works closely with Product Managers. There is mutual agreement between the developers and the Product Managers that there should be a business case behind every feature the development team builds. My question is: what is an effective way to make a business case for platform-level features that have a higher up-front cost but will provide long-term benefits? For example, the development team would like to implement a plug-in framework. There is a higher up-front cost to implement a plug-in framework, but delivering the subsequent features as plug-ins will be cheaper in the long run. This is obvious to everyone, including the Product Managers. Is there a standard, simple way to express the cost-benefit trade-off? Is there a simple way to visualize it with a graph?

    Read the article

  • .NET vs Mono differences in Development

    - by jason
    I'm looking into Mono and .NET C#; we'll need to run the code on Linux servers in the future once the project is developed. At this point I've been looking at ASP.NET MVC and Mono. I run an Ubuntu distro and want to do development for a web application; some of the other developers use Windows and run other .NET projects with Visual Studio. What does Mono not provide that Visual Studio does? If we are running this on Linux later, shouldn't we use MonoDevelop? Are there any third-party tools or add-ins that might be an issue with Mono later?

    Read the article

  • How to copy the Qt folder to another folder without reinstalling it?

    - by Oleg
    I have Qt installed on drive D (on Windows), and I want to move it to drive C. Is it possible to do that? If I just copy the Qt folder from D to C, I see a lot of errors when I compile my applications that use Qt. The errors occur because qmake.exe contains full paths to the include, bin and lib folders inside it, so when I create a solution for Visual Studio 2005 using qmake, the solution contains dependencies on the old Qt folder on drive D. I have found no way to remove these dependencies without reinstalling Qt. It is not a big problem for my own single machine, where I can simply reinstall, but I then need to deliver this change to tens or hundreds of other developers' machines, and I want to make it as easy as possible without the need to reinstall Qt.

    Read the article

  • How to create a cross-platform application, doing the interface modules (Mac/Qt/GTK+) in a totally independent way?

    - by Somebody still uses you MS-DOS
    I'm amazed at Transmission, a BitTorrent client. It has a Mac, a GTK+, a Qt, a web client and a CLI interface. I tried reading some of its source to understand how the developer creates all these interfaces, but had no luck. Does the developer create them using a single IDE? Or does he create the interface logic in each specific environment (especially the Mac), 'export' this window code and integrate it with the main logic? Is it possible to create that Mac interface on another OS using an IDE? How did the developers create this software with so many interfaces, in an independent way?

    Read the article

  • Is there any Android XML documentation?

    - by Eddified
    Is there any sort of XML reference? I found this post, which turned out to be invaluable for me: http://groups.google.com/group/android-developers/msg/d334017d72909c79 but I can't figure out how I was supposed to know how to do that had I not found it. I know that the API reference has XML attributes listed for many of the classes... but what about XML tags? Where is it documented that I could build a shape using the relevant XML tags? I'd really like to know where I can find such documentation.

    Read the article

  • Opinions about Dabo

    - by driverate
    Has anyone used Dabo lately? How does it rate vs. Boa Constructor, etc.? I'm writing a new Python database app, and Dabo looks promising, but what's the real-world scoop on it? Is it used by many developers? It's not talked about very much here on SO, or anywhere, as far as I can tell. I'm just a little concerned that the support community might be too small, or that the authors might decide to throw in the towel. What is your assessment of Dabo?

    Read the article

  • How to handle multiple projects in a small team

    - by meo
    We just started to use Scrum for our project management. We are a very small team (2 developers, 1 UI/web designer) and we have a lot of projects running at once. How do you handle having multiple projects running at once in the Scrum model? Most of the time we have one main project and some small ones. How do you combine multiple sprints efficiently? PS: I'm not sure Stack Overflow is the right place to ask this kind of question; I hope there is a Scrum master out there reading this.

    Read the article

  • When should one let an application crash because of an exception in Java (design issue)?

    - by JVerstry
    In most cases it is possible to catch exceptions in Java, even unchecked ones, but it is not necessarily possible to do something about them (for example, running out of memory). For the other cases, the issue I am trying to solve is one of design principle. I am trying to set up a design principle, or a set of rules, indicating when one should give up on an exceptional situation even if it is detected in time. The objective is to avoid crashing the application as much as possible. Has someone already brainstormed and communicated about this? I am looking for specific generic cases and possible solutions, or rules of thumb. UPDATE: Suggestions so far:

    - Stop running if data coherency can be compromised
    - Stop running if data can be deleted
    - Stop running if you can't do anything about it (out of memory...)
    - Stop running if a key service is not available or becomes unavailable and cannot be restarted
    - If the application must be stopped, degrade as gracefully as possible
    - Use rollbacks in DB transactions
    - Log as much relevant information as you can
    - Notify the developers
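    As a rough sketch of the kind of policy the suggestions above describe, with all class names hypothetical: classify the failure, log as much context as possible, abandon the current unit of work when that is safe, and fail fast (as gracefully as possible) when data integrity or an unrecoverable condition is at stake:

        import java.util.logging.Level;
        import java.util.logging.Logger;

        // Hypothetical central policy: recoverable problems are logged and the
        // current unit of work is abandoned; integrity-threatening ones stop the app.
        public class FailurePolicy {
            private static final Logger LOG = Logger.getLogger(FailurePolicy.class.getName());

            /** Thrown by lower layers when continuing could corrupt or lose data. */
            static class DataIntegrityException extends RuntimeException {
                DataIntegrityException(String msg, Throwable cause) { super(msg, cause); }
            }

            public static void handle(Throwable t) {
                if (t instanceof OutOfMemoryError || t instanceof DataIntegrityException) {
                    // Nothing sensible can be done: log what we can and stop.
                    LOG.log(Level.SEVERE, "Fatal condition, shutting down", t);
                    // Flush/close resources here, then exit as gracefully as possible.
                    System.exit(1);
                } else {
                    // Recoverable: record enough context for the developers and carry on.
                    LOG.log(Level.WARNING, "Recoverable failure, unit of work abandoned", t);
                }
            }

            public static void main(String[] args) {
                // Route anything that escapes a thread through the same policy.
                Thread.setDefaultUncaughtExceptionHandler((thread, ex) -> handle(ex));
                handle(new IllegalStateException("demo: recoverable"));
            }
        }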

    Read the article
