Search Results

Search found 5545 results on 222 pages for 'future'.

  • Benefits of migrating my work to a new web development framework?

    - by John
    When I first started programming with PHP, I was ignorant of other PHP frameworks (like CodeIgniter, CakePHP, etc.), so I fell into the trap of re-inventing wheels, which had the benefit of being "fun" and "educational". Over time, I discovered other open source products that I found useful, like the Smarty templating engine, the jQuery library, the TCPDF library, FDF, etc., so I started bundling these technologies, along with things I've built over the years, into a LAMP development framework to make life easier for myself.

    This past year, I've been having fun developing on the CodeIgniter framework. It does many of the things I do in my framework, and coding in CI feels natural because its MVC and ORM feel similar to the MVC and ORM of my framework. So now I'm contemplating migrating a lot of the plugins in my framework over to CI. The pros and cons I can think of for such a project are:

    Pros:
      - benefit from the vast community of CI developers
      - lots of other developers will be familiar with it
      - better documentation

    Cons:
      - I've built a lot of useful plugins against my own framework, and it will take a lot of time to move even just the essential ones
      - at the moment, I still work faster against my own framework than CI, just because I'm more familiar with it
      - even if I did migrate to CI, there will always be newer and better frameworks in the near future, and I'll be contemplating this scenario again

    So my question is the following: perhaps I should leave my old framework as is, and for each new project I receive, make a decision on whether the requirements are best served by developing with CI or with my own framework. Is this the right approach?

  • Assembly unavailable after Web.config change

    - by tags2k
    I'm using a custom framework that uses reflection to do a GetTypeByName(string fullName) on the fully-qualified type name that it gets from the database, to create an instance of said type and add it to the page, resulting in a standard modular kind of thing. GetTypeByName is a utility function of mine that simply iterates through Thread.GetDomain().GetAssemblies(), then performs an assembly.GetType(fullName) to find the relevant type. Obviously this result gets cached for future reference and speed.

    However, I'm experiencing some issues whereby if the web.config gets updated (and, in some scarier instances, if the application pool gets recycled), the app will lose all knowledge of certain assemblies, resulting in the inability to render an instance of the module type. Debugging shows that the missing assembly literally does not exist in the current thread's assemblies list.

    To get around this I added a second check, which is a bit dirty but recurses through the /bin/ directory's DLLs and checks that each one exists in the assemblies list. If one doesn't, it loads it using Assembly.Load, fixing the context issue thanks to 'Solving the Assembly Load Context Problem'. This would work, only it seems that (and I'm aware this shouldn't be possible) some projects still have access to the missing assembly, for example my actual web project rather than the framework itself, and it then complains that duplicate references have been added!

    Has anyone ever heard of anything like this, or have any ideas why an assembly would simply drop out of existence on a config change? Short of a solution, what is the most elegant workaround to get all the assemblies in the bin to reload? It needs to be all in one "hit" so that site visitors don't see any difference other than a small delay, so an app_offline.htm file is out of the question. Programmatically renaming a DLL in the bin and then naming it back does work, but requires "modify" permissions for the IIS user account, which is insane. Thanks for any pointers the community can gather!
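
    For reference, a minimal sketch of the two pieces described above; the names and details are illustrative, not taken from the original post:

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        static class TypeLocator
        {
            // Scan every assembly loaded in the current AppDomain for the type.
            public static Type GetTypeByName(string fullName)
            {
                foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
                {
                    Type type = assembly.GetType(fullName);
                    if (type != null)
                        return type;
                }
                return null; // caller decides how to handle a miss
            }

            // The "dirty" fallback: make sure every DLL in /bin/ is actually loaded.
            public static void EnsureBinAssembliesLoaded(string binPath)
            {
                foreach (string dll in Directory.GetFiles(binPath, "*.dll"))
                {
                    AssemblyName name = AssemblyName.GetAssemblyName(dll);
                    bool loaded = AppDomain.CurrentDomain.GetAssemblies()
                        .Any(a => AssemblyName.ReferenceMatchesDefinition(a.GetName(), name));
                    // Assembly.Load (rather than LoadFrom/LoadFile) keeps the assembly in
                    // the default load context, which is the point of the article the
                    // asker cites; loading the same identity twice is a no-op.
                    if (!loaded)
                        Assembly.Load(name);
                }
            }
        }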

  • Return a list of imported Python modules in a script?

    - by Jono Bacon
    Hi All, I am writing a program that categorizes a list of Python files by which modules they import. As such I need to scan the collection of .py files and return a list of which modules they import. As an example, if one of the files I scan has the following lines:

        import os
        import sys, gtk

    I would like it to return:

        ["os", "sys", "gtk"]

    I played with modulefinder and wrote:

        from modulefinder import ModuleFinder

        finder = ModuleFinder()
        finder.run_script('testscript.py')
        print 'Loaded modules:'
        for name, mod in finder.modules.iteritems():
            print '%s ' % name,

    but this returns more than just the modules used in the script. As an example, for a script which merely has:

        import os
        print os.getenv('USERNAME')

    the ModuleFinder script returns:

        tokenize heapq __future__ copy_reg sre_compile _collections cStringIO _sre functools random cPickle
        __builtin__ subprocess cmd gc __main__ operator array select _heapq _threading_local abc _bisect
        posixpath _random os2emxpath tempfile errno pprint binascii token sre_constants re _abcoll
        collections ntpath threading opcode _struct _warnings math shlex fcntl genericpath stat string
        warnings UserDict inspect repr struct sys pwd imp getopt readline copy bdb types strop _functools
        keyword thread StringIO bisect pickle signal traceback difflib marshal linecache itertools
        dummy_thread posix doctest unittest time sre_parse os pdb dis

    ...whereas I just want it to return 'os', as that was the module used in the script. Can anyone help me achieve this?

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

  • Reading text files line by line, with exact offset/position reporting

    - by Benjamin Podszun
    Hi. My simple requirement: reading a huge (a million line) text file (for this example, assume it's a CSV of some sort) and keeping a reference to the beginning of each line for faster lookup in the future (read a line, starting at X). I tried the naive and easy way first, using a StreamReader and accessing the underlying BaseStream.Position. Unfortunately that doesn't work as I intended. Given a file containing the following

        Foo
        Bar
        Baz
        Bla
        Fasel

    and this very simple code

        using (var sr = new StreamReader(@"C:\Temp\LineTest.txt")) {
            string line;
            long pos = sr.BaseStream.Position;
            while ((line = sr.ReadLine()) != null) {
                Console.Write("{0:d3} ", pos);
                Console.WriteLine(line);
                pos = sr.BaseStream.Position;
            }
        }

    the output is:

        000 Foo
        025 Bar
        025 Baz
        025 Bla
        025 Fasel

    I can imagine that the stream is trying to be helpful/efficient and probably reads in (big) chunks whenever new data is necessary. For me this is bad. The question, finally: any way to get the (byte, char) offset while reading a file line by line without using a basic Stream and messing with \r \n \r\n and string encoding etc. manually? Not a big deal, really, I just don't like to build things that might exist already.
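
    For comparison, a rough sketch of the manual byte-counting approach the asker hopes to avoid (assumes a single-byte encoding such as ASCII; a real version would have to honour the file's actual encoding):

        using System;
        using System.IO;
        using System.Text;

        class LineOffsets
        {
            static void Main()
            {
                using (var fs = new FileStream(@"C:\Temp\LineTest.txt", FileMode.Open, FileAccess.Read))
                {
                    var buffer = new MemoryStream();
                    long lineStart = 0;   // byte offset of the current line's first byte
                    int b;
                    while ((b = fs.ReadByte()) != -1)
                    {
                        if (b == '\n')
                        {
                            // Decode and print the completed line with its starting offset.
                            string line = Encoding.ASCII.GetString(buffer.ToArray()).TrimEnd('\r');
                            Console.WriteLine("{0:d3} {1}", lineStart, line);
                            buffer.SetLength(0);
                            lineStart = fs.Position; // next line starts right after the '\n'
                        }
                        else
                        {
                            buffer.WriteByte((byte)b);
                        }
                    }
                    if (buffer.Length > 0) // last line without a trailing newline
                        Console.WriteLine("{0:d3} {1}", lineStart, Encoding.ASCII.GetString(buffer.ToArray()));
                }
            }
        }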

  • Two way sync with rsync

    - by mwm
    I have a folder a/ and a remote folder A/. I now run something like this in a Makefile:

        get-music:
                rsync -avzru server:/media/10001/music/ /media/Incoming/music/

        put-music:
                rsync -avzru /media/Incoming/music/ server:/media/10001/music/

        sync-music: get-music put-music

    When I make sync-music, it first gets all the diffs from server to local, and then the opposite, sending all the diffs from local to server. This works very well, but only if there are just updates or new files in the future. If there are deletions, it doesn't do anything.

    rsync has --delete and --delete-after options to help accomplish what I want, but the thing is, they don't work on a two-way sync. If I want to delete server files on a sync when local files have been deleted, it works; but if, for some reason (explained below), I have some files that aren't on the server but exist locally and were deleted, I want them removed locally and not copied back from the server (as happens now). The thing is, I have 3 machines in context:

      - desktop
      - notebook
      - home-server

    So, sometimes the server will have files that were deleted in a notebook sync, for example, and then, when I run a sync with my desktop (where the deleted server files still exist), I want these files to be deleted locally and not copied again to the server. I guess this is only possible with a database and a track of operations :P Any simple solutions? Thank you.

  • What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

    - by Piotr Owsiak
    I have a DB that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed to go with the XML data type rather than NVARCHAR(MAX). This table will have logs kept "forever" (archiving some very old entries may be considered in the future).

    I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA from the referenced question got a 70% bigger DB when using XML). I have read this question http://stackoverflow.com/questions/514827/microsoft-sql-server-2005-2008-xml-vs-text-varchar-data-type and it gave me some ideas, but I am particularly interested in clarification on whether the DB size increases or decreases. Can you please share your insight/experiences in that matter?

    BTW, I don't currently have any need to depend on XML features within SQL Server (there's nearly zero advantage to me in this specific case). Occasionally log entries will be extracted, but I prefer to handle the XML using .NET (either by writing a small client or using a function defined in a .NET assembly).

  • UI Design in case of numerous situations

    - by The King
    Hi... I'm creating a web form with around 12-15 input fields (you can have a look at the screen). The request is such that, depending on the data the user selects in the GridView and the DropDownList, the appropriate TextBoxes and CheckBoxes need to be displayed.

    Sometimes the conditions are very direct, like: when the DDL value is "ABC", get only the paid amount from the user. Sometimes they are complex, like: IF the DDL is "DEF" and the selected GPMS value is between 1000-2000, calculate the values of allowed, paid etc. (using some formula) and direct focus to the Page No field, leaving the other fields open in case the user wants to edit. There are around 10-15 conditions like this.

    As this was done through agile, conditions were added as and when, and wherever it felt appropriate (DDL on-change event, GridView selection-change event, etc.). After completion, I now see the code has become a big chunk and is growing unmanageably. Now I'm planning to clean this up. From your experience, what do you think is the best way to handle this? There is a possibility of adding more conditions like this in the future. Please let me know in case you need more information. I'm currently developing this app in C# .NET WinForms.

    EDIT: Currently there are only three items (the DataGrid, the DDL, the OverrideAmt CheckBox) that change the way other fields behave. Almost all of the conditions fall between the two situations I mentioned. Mostly they belong to "enabling/disabling", "setting of values", and "changing focus", or any combination of these.
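
    One common way to tame this kind of condition sprawl (a sketch of a rules-table approach; the names are hypothetical, not from the original post) is to move every rule into a single list of condition/action pairs and evaluate them in one place:

        using System;
        using System.Collections.Generic;

        // Hypothetical snapshot of the inputs the rules depend on.
        class FormState
        {
            public string DropDownValue;
            public decimal SelectedGpms;
            public bool OverrideAmount;
        }

        class RuleEngine
        {
            // Each rule: when Condition holds, run Apply. All rules live in one list.
            private readonly List<(Func<FormState, bool> Condition, Action<FormState> Apply)> rules =
                new List<(Func<FormState, bool> Condition, Action<FormState> Apply)>();

            public void Add(Func<FormState, bool> condition, Action<FormState> apply) =>
                rules.Add((condition, apply));

            // Call this from every relevant change event instead of scattering logic per event.
            public void Evaluate(FormState state)
            {
                foreach (var rule in rules)
                    if (rule.Condition(state))
                        rule.Apply(state);
            }
        }

        // Example registration, mirroring one of the conditions in the question:
        //   engine.Add(s => s.DropDownValue == "DEF" && s.SelectedGpms >= 1000 && s.SelectedGpms <= 2000,
        //              s => { /* compute allowed/paid, move focus to the Page No field */ });

    New conditions then become one-line registrations rather than new branches inside event handlers.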

  • What should the standard be for ReSTful URLs?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on ReST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app, which seems like a good idea. However, there seems to be a lot of contradictory thought on what the best scheme is for ReSTful URLs.

    Some people advocate simple pretty URLs:

        http://api.myapp.com/resource/1

    In addition, some people like to add the API version to the URL, like so:

        http://api.myapp.com/v1/resource/1

    And to make things even more confusing, some people advocate adding the content-type to GET requests:

        http://api.myapp.com/v1/resource/1.xml
        http://api.myapp.com/v1/resource/1.json
        http://api.myapp.com/v1/resource/1.txt

    whereas others think the content-type should be sent in the HTTP header. Soooooooo... that's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL that includes a version number, resource locator and content-type, but I'm new to this so I could be wrong. On the other hand, you could argue that you should do "whatever works best for you", but that doesn't really fit with the ReST mentality as far as I can tell, since the aim is to have a standard. And since a lot of you will have more experience than me with ReST, I thought I'd ask for some guidance. So, with all that in mind: what should the standard be for ReSTful URLs?

  • iPhone crash log with dSYM not loading debug information

    - by AngeDeLaMort
    Hello, I was trying to see why my application crashed on the device (iPhone) using the dSYM generated along with the executable (in Ad Hoc), but I don't know why there isn't any useful information. It seems that "Organizer" is able to find the appropriate dSYM and translate some data into more readable form, but when it comes to my application, I just have an address. Since I know how to reproduce the crash, I've tried to set up my build so it can help me in the future. I've checked that all the proper flags are set in the project build properties and everything seems fine. After doing some research, it seems that all information is stripped at link time and the dSYM seems completely useless. I've played with some flags, but nothing changed.

    So, is there something special to do in order to get the crash file human readable? Or is it impossible with the Ad Hoc settings? The closest thing to working that I've done was to build a debug version and look up the address in it. At least it seems to give the right file. So, I made a sample app and here is what I have (the line I want is #4):

        Thread 0 Crashed:
        0   libobjc.A.dylib   0x00003ebc objc_msgSend + 20
        1   UIKit             0x0005c970 -[UIView dealloc] + 60
        2   UIKit             0x0005c840 -[UIImageView dealloc] + 76
        3   CoreFoundation    0x0003963a -[NSObject release] + 28
        4   MyApplication     0x000046a6 0x1000 + 13990
        5   UIKit             0x00069750 -[UIViewController view] + 44
        6   MyApplication     0x000053fa 0x1000 + 17402

    The crash is made using two successive releases on an object. Thanks in advance.

  • Handle order dependence in loops

    - by Matt
    Hey all, I'm making a templating system where I instantiate each tag using a foreach loop. The issue is that some of the tags rely on each other, so I'm wondering how to get around that ordering in the loop. Here's an example:

        class A {
            public $width;
            function __construct() {
                $this->width = $B->width; // Undefined! Or at least not set yet...
            }
        }

        class B {
            public $width;
            function __construct() {
                $this->width = "500px";
            }
            function __tostring() {
                return "Hello World!";
            }
        }

    Template.php:

        $tags = array("A", "B");
        foreach ($tags as $tag) {
            $TagObj[$tag] = new $tag();
        }
        echo $TagObj['A']->width; // Nadamundo!

    EDIT: OK, just to clarify: my main problem is that class A relies on class B, but class A is instantiated before class B, so width has not yet been defined in class B. I am looking for a good way to make sure all the classes are loaded for everyone, allowing the interdependencies to exist. For the future, please don't consider any syntax errors; I just made up this example on the spot. Also assume that I have access to class B from class A after class B gets instantiated. I know this has applications elsewhere and I'm sure this has been solved before; if someone could enlighten me or point me in the right direction, that'd be great! Thanks! Matt Mueler

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality, I will have to support some long-running operations, for example:

    Initiated by a user: a user can upload an (XML) file to the server. On the server I need to extract the file, do some manipulation (insert into the DB), etc. This can take from one minute to ten minutes (or even more, depending on file size). Of course I don't want to block the request while the import is running; I want to redirect the user to a progress page where they will have a chance to watch the status, see errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (in the controller action) and running the import in that thread, but I am not sure whether that is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach?

    Initiated by the system: I will have to periodically update the Lucene index with new data, and I will have to send mass emails (in the future). Should I implement this as a job in the site and run it via Quartz.Net, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!
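
    For the user-initiated case, a minimal sketch of the in-process option being weighed (illustrative names only; note it inherits the app-pool recycling risk that makes many people prefer a Windows service or a queue, since a recycle kills in-flight work):

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // Hypothetical in-memory job registry; a real app would likely persist status
        // to the DB so progress survives an app-pool recycle.
        static class ImportJobs
        {
            private static readonly ConcurrentDictionary<Guid, string> Status =
                new ConcurrentDictionary<Guid, string>();

            public static Guid Start(string uploadedFilePath)
            {
                var id = Guid.NewGuid();
                Status[id] = "queued";
                Task.Run(() =>                 // thread-pool thread inside the web process
                {
                    Status[id] = "running";
                    try
                    {
                        // ... extract the XML, insert into the DB, update Status[id] as we go ...
                        Status[id] = "done";
                    }
                    catch (Exception ex)
                    {
                        Status[id] = "failed: " + ex.Message;
                    }
                });
                return id;                     // the progress page polls GetStatus(id)
            }

            public static string GetStatus(Guid id)
            {
                string s;
                return Status.TryGetValue(id, out s) ? s : "unknown";
            }
        }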

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module, and should use data access functions (written by those responsible for other modules) when I need data from them. Let me clarify:

    I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate details (for example i18n, activation, product availability, etc.). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables), OR I may pass the relevant product_id to the get_product_info(int) function.

    The first approach is obviously demanding and introduces many things I normally consider faults in programming. The problem with the second approach seems to be the countless mini queries these access functions cause; performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way they are doing things, or for some reason modified the schema? It would mean some other modules would break or malfunction until the change was integrated into them. The usual ripple effect problem.

  • Lazy Sequences that "Look Ahead" for Project Euler Problem 14

    - by ivar
    I'm trying to solve Project Euler Problem 14 in a lazy way. Unfortunately, I may be trying to do the impossible: create a lazy sequence that is both lazy, yet also somehow 'looks ahead' for values it hasn't computed yet. The non-lazy version I wrote to test correctness was:

        (defn chain-length [num]
          (loop [len 1
                 n num]
            (cond (= n 1) len
                  (odd? n) (recur (inc len) (+ 1 (* 3 n)))
                  true (recur (inc len) (/ n 2)))))

    which works, but is really slow. Of course I could memoize that:

        (def memoized-chain
          (memoize (fn [n]
                     (cond (= n 1) 1
                           (odd? n) (+ 1 (memoized-chain (+ 1 (* 3 n))))
                           true (+ 1 (memoized-chain (/ n 2)))))))

    However, what I really wanted to do was scratch my itch for understanding the limits of lazy sequences, and write a function like this:

        (def lazy-chain
          (letfn [(chain [n]
                    (lazy-seq
                      (cons (if (odd? n)
                              (+ 1 (nth lazy-chain (dec (+ 1 (* 3 n)))))
                              (+ 1 (nth lazy-chain (dec (/ n 2)))))
                            (chain (+ n 1)))))]
            (chain 1)))

    Pulling elements from this will cause a stack overflow for n > 2, which is understandable if you think about why it needs to look 'into the future' at n=3 to know the value of the tenth element in the lazy list, because (+ 1 (* 3 n)) = 10. Since lazy lists have much less overhead than memoization, I would like to know if this kind of thing is possible somehow via even more delayed evaluation or queuing?

  • Protecting an Application's Memory From Tampering

    - by Changeling
    We are adding AES 256-bit encryption to our server and client applications for encrypting the TCP/IP traffic containing sensitive information. We will be rotating the keys daily. Because of that, the keys will be stored in memory with the applications.

    Key distribution process:

      - Each server and client will have a list of initial Key Encryption Keys (KEKs) by day.
      - If the client or the server has just started up, the client will request the daily key from the server using the initial key. The server will respond with the daily key, encrypted with the initial key.
      - The daily key is a randomly generated set of alphanumeric characters. We are using AES 256-bit encryption.
      - All subsequent communications will be encrypted using that daily key.
      - Nightly, the client will request the new daily key from the server, using the current daily key as the current KEK. After the client gets the new key, the new daily key replaces the old daily key.

    Is it possible for another bad application to gain access to this memory illegally, or is this protected in Windows? The key will not be written to a file, only stored in a variable in memory. If an application can access the memory illegally, how can you protect the memory from tampering? We are using C++ and XP (Vista/7 may be an option in the future, so I don't know if that changes the answer).
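
    On the Windows side, one relevant facility is DPAPI memory protection. A hedged sketch using the managed wrapper (the question is C++, where the rough native equivalent is CryptProtectMemory / RtlEncryptMemory; this C# version is illustrative only and needs a reference to System.Security.dll):

        using System;
        using System.Security.Cryptography;

        class KeyGuard
        {
            static void Main()
            {
                // 32 bytes = an AES-256 key; ProtectedMemory requires a multiple of 16 bytes.
                byte[] dailyKey = new byte[32];
                new RNGCryptoServiceProvider().GetBytes(dailyKey);

                // Encrypt the key in place; only this process can decrypt it.
                ProtectedMemory.Protect(dailyKey, MemoryProtectionScope.SameProcess);

                // ... just before use ...
                ProtectedMemory.Unprotect(dailyKey, MemoryProtectionScope.SameProcess);
                // use dailyKey, then immediately re-protect it
                ProtectedMemory.Protect(dailyKey, MemoryProtectionScope.SameProcess);

                // When rotating keys, zero the old material rather than waiting for the GC.
                Array.Clear(dailyKey, 0, dailyKey.Length);
            }
        }

    Note this only raises the bar: an attacker with administrative or debug privileges can still read the process's memory while the key is unprotected.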

  • Should I use concrete inheritance or not?

    - by Mez
    I have a project using Propel where I have three objects (potentially more in the future):

      - Occasion
      - Event extends Occasion
      - Gig extends Occasion

    Occasion is an item that has the shared things that will always be needed (venue, start, end, etc.). With this, I want to be able to add in extra functionality, say for example adding "Band" objects to the Gig object, or "Flyers" to an "Event" object. For this, I plan to create objects for these. However, without concrete inheritance, I have to have the foreign key point to the Occasion object, giving the (Propel-generated) functions for all of these extra bits to anything inherited from Occasion. I could, in theory, do this without a foreign constraint, and add in functions that use the Peer or Query classes to get things related to the "Gig" or similar. Whereas with concrete inheritance, I would only have these functions on the things where they belong.

    I think the decision here is whether I should duck-type the objects (after all, they are Occasions) or whether I should just use the "Occasion" object as a "template" (only being used to search for things, like all occasions at a venue). Thoughts? Comments?

  • Add Keyboard Binding To Existing Emacs Mode

    - by Sean M
    I'm attempting my first modification of Emacs. I recorded a little keyboard macro and had Emacs spit it out as elisp, resulting in:

        (setq add-docstring "\C-rdef\C-n\C-a\C-m\C-p\C-i\C-u6\"\C-u3\C-b")
        (global-set-key "\C-c\C-d" 'add-docstring)

    Searching the Emacs reference, though, revealed that C-c C-d is already bound in diff mode. I don't plan on using diff mode, but the future is unknowable and I'd rather not lay a trap for myself. So I'd like this keybinding to operate only in python-mode, where it tries to help me add docstrings.

    In my /usr/share/emacs/23.whatever/lisp/progmodes directory, I found python.elc and python.el.gz. I unzipped python.el.gz and got a readable version of the elisp file. Now, though, the documentation becomes opaque to me. How can I add my key binding to python-mode only, instead of globally? And, for bonus points, is it possible to apply the changes to python-mode without restarting Emacs or closing open files? It's the self-modifying editor; I figure there's a good chance it's possible.

  • Output reformatted text within a file included in a JSP

    - by javanix
    I have a few HTML files that I'd like to include via include tags in my webapp. Within some of the files, I have pseudo-dynamic code: specially formatted bits of text that, at runtime, I'd like to be resolved to their respective bits of data in a MySQL table. For instance, the HTML file might include a line that says:

        Welcome, [username].

    I want this resolved to (via a logged-in user's data):

        Welcome, [email protected].

    This would be simple to do in a JSP file, but the requirements dictate that the files will be created by people who know basic HTML but not JSP. Simple text tags like this should be easy enough for me to explain to them, however. I have the code set up to do resolutions like that for strings, but can anyone think of a way to do it across files? I don't actually need to modify the file on disk; I just need to load the content, modify it, and output it within the containing JSP file. I've been playing around with trying to load the files into strings via Apache Commons' readFileToString, but I can't figure out how to load files from a specific folder within the webapp's content directory without hardcoding it in and having to worry about it breaking if I deploy to a different system in the future.

  • Conditionally overriding a system method via categories in Objective-C?

    - by adib
    Hi. Is there a way to provide a method implementation (that bears the exact same name as a method defined by the framework) only when the method isn't already defined by the system? For example, the method [NSSomeClass someMethod:] exists only in Mac OS X 10.6; if my app runs on 10.5, I will provide that method's definition in a category. But when the app runs on 10.6, I want the OS-provided method to run.

    Background: I'm creating an app targeted at both 10.5 and 10.6. The problem is that I recently realized that the method +[NSSortDescriptor sortDescriptorWithKey:ascending:] only exists in 10.6, and my code is already littered with that method call. I could provide a default implementation for it (since this time it's not too difficult to implement myself), but I want the "native" one to be called whenever my app runs on 10.6. Furthermore, if I encounter similar problems in the future (with more difficult-to-implement-myself methods), I might not be able to get away with providing a one-liner replacement.

    This question is vaguely similar to "Override a method via ObjC Category and call the default implementation?" but the difference is that I want to provide implementations only when the system doesn't already have one. Thanks.

  • Enum and Dictionary<Enum, Action>

    - by Selcuk
    I hope I can explain my problem in a way that's clear for everyone. We need your suggestions on this. We have an Enum type which has more than 15 constants defined. We receive a report from a web service and translate one of its columns into this Enum type, and based on what we receive from the web service, we run specific functions looked up from a Dictionary.

    Why am I asking for ideas? Let's say 3 of these Enum constants map to specific functions in our Dictionary, but the rest all use the same function. So, is there a way to add them to our Dictionary in a better way, rather than adding them one by one? I also want to keep this structure, because when the time comes we might have specific functions for the ones that I described as "the rest". To be more clear, here's an example of what we're trying to do.

    Enum:

        public enum Reason {
            ReasonA, ReasonB, ReasonC, ReasonD, ReasonE, ReasonF,
            ReasonG, ReasonH, ReasonI, ReasonJ, ReasonK
        }

    Defining our Dictionary:

        public Dictionary<Reason, Action<CustomClassObj, string>> ReasonHandlers =
            new Dictionary<Reason, Action<CustomClassObj, string>> {
                { Reason.ReasonA, HandleReasonA },
                { Reason.ReasonB, HandleReasonB },
                { Reason.ReasonC, HandleReasonC },
                { Reason.ReasonD, HandleReasonGeneral },
                { Reason.ReasonE, HandleReasonGeneral },
                { Reason.ReasonF, HandleReasonGeneral },
                { Reason.ReasonG, HandleReasonGeneral },
                { Reason.ReasonH, HandleReasonGeneral },
                { Reason.ReasonI, HandleReasonGeneral },
                { Reason.ReasonJ, HandleReasonGeneral },
                { Reason.ReasonK, HandleReasonGeneral }
            };

    So basically what I'm asking is: is there a way to add the Reason/function pairs more intelligently? Because as you can see, after ReasonC all other reasons use the same function. Thank you for your suggestions.
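
    One common answer to the "add them more intelligently" part (a sketch, not from the original post): register only the special cases, then loop over Enum.GetValues to default everything else to the general handler.

        // Assumes the Reason enum, CustomClassObj and the Handle* methods from the question.
        var handlers = new Dictionary<Reason, Action<CustomClassObj, string>>
        {
            { Reason.ReasonA, HandleReasonA },
            { Reason.ReasonB, HandleReasonB },
            { Reason.ReasonC, HandleReasonC }
        };

        // Every constant not explicitly registered falls back to the general handler;
        // constants added to the enum later are picked up automatically until someone
        // registers a specific handler for them.
        foreach (Reason r in Enum.GetValues(typeof(Reason)))
            if (!handlers.ContainsKey(r))
                handlers[r] = HandleReasonGeneral;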

  • SVN checkout or export for production environment?

    - by Eran Galperin
    In a project I am working on, we have an ongoing discussion within the dev team: should the production environment be deployed as a checkout from the SVN repository or as an export?

    The development environment is obviously a checkout, since it is constantly updated. For production, I'm personally for checking out the main trunk, since it makes future updates easier (just run svn update). However, some of the devs are against it, as svn creates files with the group/owner and permissions of the svn process (this is on a Linux OS, so those things matter), and having the .svn directories on production also seems somewhat dirty to them.

    Also, if it is a checkout, how do you push individual features to production without including in-development code? Do you use tags, or branch out for each feature? Any alternatives?

    EDIT: I might not have been clear: one of the requirements is to be able to constantly push fixes to the production environment. We want to avoid a complete build (which takes much longer than a simple update) just for pushing critical fixes.

  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hotfix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch. I then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files, and there is a conflict on an external and an ignore property, which I resolve by "using local" (Dev).

    When I check modifications or commit, I expect to see the five files I merged as the only changes. However, I get close to 700 "modified" files showing up in the commit dialog. If I select one of these files and "Compare with base", WinMerge comes up and says the "files are identical". I have tried this with the file dates set to "last committed" and not.

    Why are all of these files showing up as modified when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?

  • How does my .NET 4 application know .NET 4 is not installed?

    - by Tergiver
    I developed an application that targeted .NET 4 the other day and XCOPY-installed it to a Windows XP machine. I had told the owner of the machine that they would need to install .NET Framework 4 to run my app, and he told me he did (not a reliable source). When I ran the application, I was presented with a message box that said this app requires .NET Framework 4: would I like to install it? Clicking the Yes button took me to the Microsoft web site, and a few clicks later .NET 4 was installed and the application successfully launched.

    Now, I don't normally develop applications that target the latest version of .NET; I always target the lowest version I can (what features do I really need?). So this was my first .NET 4 app (and I only targeted 4 because it used a library that did). In the past, XCOPY-installing .NET applications to a machine that didn't have the correct version of .NET installed resulted in the application simply crashing on startup with no useful information presented to the user.

    Was this behavior built into my app because I targeted .NET X? Was it something already installed on the target machine? I love the feature; I just want to know precisely how to leverage it in the future.
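
    For what it's worth, my understanding (an assumption, not from the original post) is that the CLR activation shim on the machine reads the runtime version the executable requires and offers the download when it's missing; the requirement can also be declared explicitly in an app.exe.config deployed alongside the XCOPY'd executable:

        <?xml version="1.0"?>
        <!-- MyApp.exe.config (file name hypothetical): the supportedRuntime element
             declares which CLR versions the app can run on, and is checked before
             the CLR itself is loaded. -->
        <configuration>
          <startup>
            <supportedRuntime version="v4.0" />
          </startup>
        </configuration>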

  • JUnit confusion: use 'extends TestCase' or '@Test'?

    - by Rabarberski
    I've found the proper use (or at least the documentation) of JUnit very confusing. This question serves both as a future reference and as a real question. If I've understood correctly, there are two main approaches to creating and running a JUnit test:

    Approach A: create a class that extends TestCase, and start test methods with the word "test". When running the class as a JUnit test (in Eclipse), all methods starting with the word "test" are automatically run.

        import junit.framework.TestCase;

        public class DummyTestA extends TestCase {
            public void testSum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Approach B: create a 'normal' class and prepend a @Test annotation to the method. Note that you do NOT have to start the method name with the word "test".

        import org.junit.*;
        import static org.junit.Assert.*;

        public class DummyTestB {
            @Test
            public void Sum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Mixing the two seems not to be a good idea; see e.g. this Stack Overflow question. Now, my question(s):

      - What is the preferred approach, or when would you use one instead of the other?
      - Approach B allows for testing for exceptions by extending the @Test annotation, as in @Test(expected = ArithmeticException.class). But how do you test for exceptions when using approach A?
      - When using approach A, you can group a number of test classes in a test suite:

            TestSuite suite = new TestSuite("All tests");
            suite.addTestSuite(DummyTestA.class);
            suite.addTestSuite(DummyTestAbis.class);

        But this can't be used with approach B (since each test class should subclass TestCase). What is the proper way to group tests for approach B?

  • Are mathematical algorithms protected by copyright?

    - by analogy
    I wish to implement an algorithm which I read in a journal paper in my (commercial) software. I want to know if this is allowed or not. The algorithm in question is described in http://arxiv.org/abs/0709.2938. It is a very simple algorithm, and a number of implementations exist in Python (http://igraph.sourceforge.net/) and Java. One of them is GPL; another, which I got from a different researcher, had no license attached. There are significant differences in the two implementations, e.g. the second one uses threads and multiple cores. It is possible to rewrite (not translate) the algorithm. So can I use it in my software or on a server for commercial purposes? Thanks.

    UPDATE: I am completely aware of the copyright on the text of the paper; it was published in Phys. Rev. E. I am concerned with use of the algorithm in commercial software. Also, publication means that, unless a patent had already been filed, the method has been disclosed publicly, barring a patent in the future. Also, the GPL implementation is not by the authors themselves but comes from a third party. Finally, I am not using the GPL implementation but creating my own in C++.
