Search Results

Search found 5545 results on 222 pages for 'future'.


  • Assembly unavailable after Web.config change

    - by tags2k
    I'm using a custom framework that uses reflection to do a GetTypeByName(string fullName) on the fully-qualified type name it gets from the database, then creates an instance of that type and adds it to the page, resulting in a standard modular kind of thing. GetTypeByName is a utility function of mine that simply iterates through Thread.GetDomain().GetAssemblies(), then performs an assembly.GetType(fullName) to find the relevant type. Obviously this result gets cached for future reference and speed.

    However, I'm experiencing some issues whereby if the web.config gets updated (and, in some scarier instances, if the application pool gets recycled) the application loses all knowledge of certain assemblies, resulting in the inability to render an instance of the module type. Debugging shows that the missing assembly literally does not exist in the current thread's assemblies list.

    To get around this I added a second check, which is a bit dirty: it recurses through the /bin/ directory's DLLs and checks that each one exists in the assemblies list. If one doesn't, it loads it using Assembly.Load, fixing the context issue thanks to 'Solving the Assembly Load Context Problem'. This would work, only it seems that (and I'm aware this shouldn't be possible) some projects still have access to the missing assembly, for example my actual web project rather than the framework itself, and it then complains that duplicate references have been added!

    Has anyone ever heard of anything like this, or have any ideas why an assembly would simply drop out of existence on a config change? Short of a solution, what is the most elegant workaround to get all the assemblies in the bin to reload? It needs to happen in one "hit" so that site visitors see nothing other than a small delay, so an app_offline.htm file is out of the question. Programmatically renaming a DLL in the bin and then naming it back does work, but it requires "modify" permissions for the IIS user account, which is insane. Thanks for any pointers the community can gather!
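
    For reference, a minimal sketch of the lookup described above, assuming the question's method name and leaving the caching out:

        using System;
        using System.Threading;

        static class TypeFinder
        {
            public static Type GetTypeByName(string fullName)
            {
                // Scan every assembly currently loaded into the AppDomain.
                foreach (var assembly in Thread.GetDomain().GetAssemblies())
                {
                    var type = assembly.GetType(fullName);
                    if (type != null)
                        return type;   // the real framework caches this hit
                }
                return null;           // the failure case the question describes
            }
        }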

    Read the article

  • Biztalk vs API for databroker layer

    - by jdt199
    My company is about to undertake a large project in which our client wants a large customer portal with a CMS and CRM. This will require interaction with data from multiple sources across our customer's business; these sources include XML office back-end systems, SQL databases, web services, etc.

    Our proposed solution would be to write an API in C# to provide a common interface to all these systems. This would be scalable for future and concurrent projects within the company. Our client expressed an interest in using BizTalk rather than a custom API for this integration, as they feel it is an enterprise solution that any of their suppliers could pick up and use, and that it will be better supported. We feel that the configuration work using BizTalk would be rather heavy for all of their custom business rules, and that an interface would still need to be written for the new application to get data to and from BizTalk.

    Are we right to prefer a custom API solution over BizTalk? Would BizTalk be suitable as a data-broker layer to provide an interface for the new customer portal we are writing? We have no experience with BizTalk, so any input would be appreciated.
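
    For illustration, a hypothetical sketch of the kind of common interface being proposed (all names invented; this is not a recommendation either way):

        using System.Collections.Generic;

        public class Customer
        {
            public string Id;
            public string Name;
        }

        // One contract for the portal to code against; each back end
        // (XML exports, SQL databases, web services) gets its own
        // implementation behind this interface.
        public interface IDataBroker
        {
            Customer GetCustomer(string customerId);
            IList<Customer> FindCustomers(string nameFragment);
        }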

    Read the article

  • Why use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this...

    I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database back end. I design and build the databases as well. I have used L2S successfully a couple of times already. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S?

    I was at Code Camp this weekend and, after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment.

    One particular issue we have here is that I inherited the core app, which was built on four different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"

    Read the article

  • Can I use a specific model from within a behavior in CakePHP?

    - by Paul Willy
    I'm trying to write a behavior that will give my models access to a simple workflow engine I've devised. The workflow engine itself works as a CakePHP model, with workflow data stored in the database just as any other model data is stored.

    Basically what I want to do is have the behavior use the workflow model whenever an action is called on the base model. For example, if the edit() action is executed for Posts, then the Post (with the behavior attached) will trigger the workflow behavior with its own model name, action, and id as arguments (e.g. [Post, edit, 1]). Then the behavior will invoke the functionality of the Workflow model, which has a record for what to do when edit is run on Posts (e.g. send e-mail to users who are subscribed to that post), and will carry that out.

    My question is: what is the proper way to invoke model/controller methods from within the behavior? The model to be used from within the behavior will always be Workflow, but the behavior should be usable from basically any model (aside from Workflow itself). I know I could run SQL queries directly from the behavior, but of course this is not the Cake way :-)

    Or am I going about this in the wrong way? I want to store a certain amount of logic in the database so that it is easily configurable by different users, and not have endless configuration checks within the model/controller logic itself, so that workflow steps can be easily added/changed/removed in the future.

    Read the article

  • Two way sync with rsync

    - by mwm
    I have a folder a/ and a remote folder A/. I now run something like this from a Makefile:

        get-music:
            rsync -avzru server:/media/10001/music/ /media/Incoming/music/

        put-music:
            rsync -avzru /media/Incoming/music/ server:/media/10001/music/

        sync-music: get-music put-music

    When I run make sync-music, it first gets all the diffs from server to local, then does the opposite, sending all the diffs from local to server. This works very well, but only if there are just updates or new files. If there are deletions, it doesn't do anything.

    rsync has --delete and --delete-after options to help accomplish what I want, but the thing is, they don't work for a two-way sync. Deleting server files on a sync when the local files have been deleted works; but if, for some reason (explained below), files exist locally that aren't on the server because they were deleted, I want them removed locally, not copied back from the server (which is what happens).

    The thing is, I have three machines in play: desktop, notebook, and home server. So sometimes the server will have files that were deleted during a sync with the notebook, for example; then, when I run a sync from my desktop (where the deleted files still exist), I want those files to be deleted locally and not copied back to the server. I guess this is only possible with a database and a log of operations :P Any simple solutions? Thank you.

    Read the article

  • Add Keyboard Binding To Existing Emacs Mode

    - by Sean M
    I'm attempting my first modification of Emacs. I recorded a little keyboard macro and had Emacs spit it out as elisp, resulting in:

        (setq add-docstring "\C-rdef\C-n\C-a\C-m\C-p\C-i\C-u6\"\C-u3\C-b")
        (global-set-key "\C-c\C-d" 'add-docstring)

    Searching the Emacs reference, though, revealed that C-c C-d is already bound in diff mode. I don't plan on using diff mode, but the future is unknowable and I'd rather not lay a trap for myself. So I'd like this keybinding to operate only in Python mode, where it tries to help me add docstrings.

    In my /usr/share/emacs/23.whatever/lisp/progmodes directory, I found python.elc and python.el.gz. I unzipped python.el.gz and got a readable version of the elisp file. Now, though, the documentation becomes opaque to me. How can I add my key binding to Python mode instead of globally? And, for bonus points, is it possible to apply the changes to Python mode without restarting Emacs or closing open files? It's the self-modifying editor; I figure there's a good chance it's possible.

    Read the article

  • Reading text files line by line, with exact offset/position reporting

    - by Benjamin Podszun
    Hi. My simple requirement: reading a huge (a million line) test file (for this example, assume it's a CSV of some sort) and keeping a reference to the beginning of each line for faster lookup in the future (read a line starting at X).

    I tried the naive and easy way first, using a StreamReader and accessing the underlying BaseStream.Position. Unfortunately that doesn't work as I intended. Given a file containing

        Foo
        Bar
        Baz
        Bla
        Fasel

    and this very simple code

        using (var sr = new StreamReader(@"C:\Temp\LineTest.txt"))
        {
            string line;
            long pos = sr.BaseStream.Position;
            while ((line = sr.ReadLine()) != null)
            {
                Console.Write("{0:d3} ", pos);
                Console.WriteLine(line);
                pos = sr.BaseStream.Position;
            }
        }

    the output is:

        000 Foo
        025 Bar
        025 Baz
        025 Bla
        025 Fasel

    I can imagine that the stream is trying to be helpful/efficient and probably reads ahead in (big) chunks whenever new data is necessary. For me this is bad. The question, finally: is there any way to get the (byte or char) offset while reading a file line by line, without dropping down to a raw Stream and messing with \r, \n, \r\n and string encoding etc. manually? Not a big deal, really; I just don't like building things that might already exist.
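
    One workable approach, sketched below using the question's file path: read the raw bytes yourself, split on line endings, and count the bytes you consume. This assumes ASCII/UTF-8 content without a BOM; a production version would have to handle encodings properly, which is exactly the drudgery the question hopes to avoid.

        using System;
        using System.IO;
        using System.Text;

        static class OffsetReader
        {
            static void Main()
            {
                using (var fs = File.OpenRead(@"C:\Temp\LineTest.txt"))
                {
                    var line = new MemoryStream();
                    long offset = 0;      // bytes consumed so far
                    long lineStart = 0;   // byte offset where the current line began
                    int b;
                    while ((b = fs.ReadByte()) != -1)
                    {
                        offset++;
                        if (b == '\n')
                        {
                            Print(lineStart, line);
                            lineStart = offset;
                        }
                        else
                        {
                            line.WriteByte((byte)b);
                        }
                    }
                    if (line.Length > 0)
                        Print(lineStart, line);   // last line without a newline
                }
            }

            static void Print(long start, MemoryStream line)
            {
                // Decode the buffered bytes, dropping a trailing \r from \r\n endings.
                var text = Encoding.UTF8.GetString(line.ToArray()).TrimEnd('\r');
                Console.WriteLine("{0:d3} {1}", start, text);
                line.SetLength(0);
            }
        }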

    Read the article

  • UI Design, in case of numerous situations

    - by The King
    Hi... I'm creating a web form with around 12-15 input fields (you can have a look at the screen). The requirement is that, depending on the data the user selects in the GridView and the drop-down list (DDL), the appropriate textboxes and checkboxes need to be displayed.

    Sometimes the conditions are very direct, like: when the DDL value is "ABC", get only the paid amount from the user. Sometimes they are complex, like: if the DDL is "DEF" and the selected GPMS value is between 1000-2000, calculate the values of allowed, paid, etc. (using some formula) and move focus to the Page No field, leaving the other fields open in case the user wants to edit. There are around 10-15 conditions like this.

    As this was done through agile, conditions were added as and when needed, wherever it felt appropriate (the DDL's change event, the GridView's selection-changed event, etc.). Now that it's complete, I see the code has become one big chunk and is growing unmanageably, so I'm planning to clean it up. From your experience, what do you think is the best way to handle this? There is a possibility of more conditions like this being added in the future. Please let me know if you need more information. I'm currently developing this app in C# .NET WinForms.

    Edit: Currently there are only three items (the DataGrid, the DDL, and the OverrideAmt checkbox) that change the way other fields behave. Almost all of the conditions fall between the two situations I mentioned; mostly they involve enabling/disabling, setting values, or changing focus, or some combination of these.
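
    One common cleanup for conditions scattered across event handlers is a declarative rule table evaluated in a single place. A hypothetical sketch (all names invented):

        using System;
        using System.Collections.Generic;

        class FormState
        {
            public string DdlValue;   // current drop-down selection
            public decimal Gpms;      // value picked in the grid
        }

        class Rule
        {
            public Func<FormState, bool> AppliesTo;   // when does this rule fire?
            public Action<FormState> Apply;           // enable/disable, set, focus...
        }

        class RuleEngine
        {
            readonly List<Rule> rules = new List<Rule>
            {
                new Rule {
                    AppliesTo = s => s.DdlValue == "ABC",
                    Apply = s => { /* enable only the Paid Amount field */ }
                },
                new Rule {
                    AppliesTo = s => s.DdlValue == "DEF" && s.Gpms >= 1000 && s.Gpms <= 2000,
                    Apply = s => { /* compute Allowed/Paid, focus the Page No field */ }
                },
                // a new condition in the future = one new entry here
            };

            public void Evaluate(FormState state)
            {
                foreach (var rule in rules)
                    if (rule.AppliesTo(state))
                        rule.Apply(state);
            }
        }

    Every control event (DDL change, grid selection, checkbox) then makes the same single call to Evaluate, so the rules live in one place instead of being spread across handlers.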

    Read the article

  • What should the standard be for ReSTful URLs?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on ReST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app, which seems like a good idea. However, there seems to be a lot of contradictory thought on what the best scheme is for ReSTful URLs.

    Some people advocate simple pretty URLs:

        http://api.myapp.com/resource/1

    In addition, some people like to add the API version to the URL, like so:

        http://api.myapp.com/v1/resource/1

    And to make things even more confusing, some people advocate adding the content type to GET requests:

        http://api.myapp.com/v1/resource/1.xml
        http://api.myapp.com/v1/resource/1.json
        http://api.myapp.com/v1/resource/1.txt

    whereas others think the content type should be sent in the HTTP headers. Soooooooo... that's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL, one that includes a version number, resource locator, and content type, but I'm new to this so I could be wrong.

    On the other hand, you could argue that you should do "whatever works best for you", but that doesn't really fit with the ReST mentality as far as I can tell, since the aim is to have a standard. And since a lot of you will have more experience than me with ReST, I thought I'd ask for some guidance. So, with all that in mind: what should the standard be for ReSTful URLs?

    Read the article

  • Protecting an Application's Memory From Tampering

    - by Changeling
    We are adding AES 256-bit encryption to our server and client applications to encrypt the TCP/IP traffic containing sensitive information. We will be rotating the keys daily; because of that, the keys will be stored in memory with the applications.

    Key distribution process:

    - Each server and client will have a list of initial Key Encryption Keys (KEKs), one per day.
    - If the client or the server has just started up, the client will request the daily key from the server using the initial key. The server will respond with the daily key, encrypted with the initial key.
    - The daily key is a randomly generated set of alphanumeric characters. We are using AES 256-bit encryption.
    - All subsequent communications will be encrypted using that daily key.
    - Nightly, the client will request the new daily key from the server, using the current daily key as the current KEK. After the client gets the new key, the new daily key replaces the old one.

    Is it possible for another, malicious application to gain access to this memory illegally, or is this protected in Windows? The key will not be written to a file, only stored in a variable in memory. If an application can access the memory illegally, how can you protect the memory from tampering? We are using C++ on XP (Vista/7 may be an option in the future, so I don't know if that changes the answer).

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module, and should instead use data-access functions (written by those responsible for the other modules) when I need data from them.

    Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate details (for example i18n, activation, product availability, etc.). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables), OR I may pass the relevant product_id to the get_product_info(int) function.

    The first approach is obviously demanding and introduces many things I normally consider bad practice or outright faults in programming. The problem with the second approach seems to be the countless mini-queries these access functions cause; performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way they are doing things, or for some reason modified the schema? It would mean other modules break or malfunction until the change is integrated into them. The usual ripple-effect problem.
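
    A common middle ground between the two approaches is to keep the encapsulation but add a bulk accessor, so the 100-product loop becomes one query behind the interface. A hypothetical sketch in C# (the question's get_product_info(int) suggests PHP, but the shape is the same in any language):

        using System.Collections.Generic;

        public class ProductInfo
        {
            public int Id;
            public string Title;
        }

        public interface IProductModule
        {
            // The existing per-item call, fine for single lookups...
            ProductInfo GetProductInfo(int productId);

            // ...plus a batch call the Products team can implement as ONE
            // query (e.g. WHERE product_id IN (...)), removing the N+1
            // problem without exposing their schema to other modules.
            IDictionary<int, ProductInfo> GetProductInfo(ICollection<int> productIds);
        }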

    Read the article

  • What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

    - by Piotr Owsiak
    I have a DB that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed going with the XML data type rather than NVARCHAR(MAX). This table will have logs kept "forever" (archiving some very old entries may be considered in the future).

    I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA, from the referenced question, got a 70% bigger DB when using XML). I have read this question, http://stackoverflow.com/questions/514827/microsoft-sql-server-2005-2008-xml-vs-text-varchar-data-type, and it gave me some ideas, but I am particularly interested in clarification on whether the DB size increases or decreases. Can you please share your insight/experience in the matter?

    BTW, I don't currently have any need for the XML features within SQL Server (there's nearly zero advantage to me in this specific case). Occasionally log entries will be extracted, but I prefer to handle the XML using .NET (either by writing a small client or using a function defined in a .NET assembly).

    Read the article

  • Should I use concrete Inheritance or not?

    - by Mez
    I have a project using Propel where I have three objects (potentially more in the future):

    - Occasion
    - Event extends Occasion
    - Gig extends Occasion

    Occasion is an item that holds the shared things that will always be needed (venue, start, end, etc.). On top of this I want to be able to add extra functionality, for example adding "Band" objects to the Gig object, or "Flyer" objects to an Event object; I plan to create objects for these.

    However, without concrete inheritance, I have to have the foreign key point to the Occasion object, which gives the (Propel-generated) functions for all of these extra bits to anything inherited from Occasion. I could, in theory, do this without a foreign key constraint and add functions that use the Peer or Query classes to fetch things related to the "Gig" or similar. With concrete inheritance, on the other hand, I would only have these functions on the things they belong to.

    I think the decision here is whether I should duck-type the objects (after all, they are Occasions) or whether I should just use the "Occasion" object as a "template" (only being used to search for things, like all occasions at a venue). Thoughts? Comments?

    Read the article

  • php parsing speed optimization

    - by Arnaud
    I would like to add tooltips or generate links according to the elements available in the database. For example, if the HTML page printed is:

        to reboot your linux host in single-user mode you can ...

    I will use explode(" ", $row[page]), and the idea is then to look up every single word of the page to find out whether it has a related reference. In this example, let's say I've got a table reference with one entry for reboot and one for linux:

        reboot: restart a computer
        linux: operating system

    Now my output should look like this (I replaced < and > with @):

        to @a href="ref/reboot"@reboot@/a@ your @a href="ref/linux"@linux@/a@ host in single-user mode you can ...

    Instead of a static list generated when I save the content, if I add more keywords in the future the text will become more interactive.

    My main concern and question is: how can I make this process efficient enough? Should I store all the DB entries in an array and compare against them? Do an SQL query for each word (seems crazy)? Dump the table to a file and use a very long regex, or a "grep -f pattern data" way of doing it? Or, or, or... I'm sure there must be a better way; I just don't have a clue about it. Or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!

    Read the article

  • Lazy Sequences that "Look Ahead" for Project Euler Problem 14

    - by ivar
    I'm trying to solve Project Euler Problem 14 in a lazy way. Unfortunately, I may be trying to do the impossible: create a lazy sequence that is both lazy, yet also somehow 'looks ahead' for values it hasn't computed yet.

    The non-lazy version I wrote to test correctness was:

        (defn chain-length [num]
          (loop [len 1
                 n num]
            (cond (= n 1) len
                  (odd? n) (recur (inc len) (+ 1 (* 3 n)))
                  true (recur (inc len) (/ n 2)))))

    which works, but is really slow. Of course I could memoize that:

        (def memoized-chain
          (memoize (fn [n]
                     (cond (= n 1) 1
                           (odd? n) (+ 1 (memoized-chain (+ 1 (* 3 n))))
                           true (+ 1 (memoized-chain (/ n 2)))))))

    However, what I really wanted to do was scratch my itch for understanding the limits of lazy sequences, and write a function like this:

        (def lazy-chain
          (letfn [(chain [n]
                    (lazy-seq
                      (cons (if (odd? n)
                              (+ 1 (nth lazy-chain (dec (+ 1 (* 3 n)))))
                              (+ 1 (nth lazy-chain (dec (/ n 2)))))
                            (chain (+ n 1)))))]
            (chain 1)))

    Pulling elements from this will cause a stack overflow for n > 2, which is understandable if you think about why it needs to look 'into the future': at n=3 it has to know the value of the tenth element in the lazy list, because (+ 1 (* 3 n)) = 10. Since lazy lists have much less overhead than memoization, I would like to know if this kind of thing is possible somehow via even more delayed evaluation or queuing?

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functions I will have to support some long-running operations, for example:

    Initiated by a user: a user can upload an XML file to the server. On the server I need to extract the file, do some manipulation (insert into the DB), etc. This can take from one minute to ten minutes (or even more; it depends on the file size). Of course I don't want to block the request while the import is running; I want to redirect the user to a progress page where he will have a chance to watch the status, see errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (in the controller action) and running the import there, but I am not sure if this is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach?

    Initiated by the system: I will have to periodically update the Lucene index with new data, and I will have to send mass emails (in the future). Should I implement this as a job in the site and run it via Quartz.Net, or should I also create a Windows service or something?

    What are the best practices when it comes to running site "jobs"? Thanks!
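
    For the user-initiated import, one common in-process approach is sketched below: queue the work to the ThreadPool and track progress in a shared table that a progress action can poll. All names are invented, and for anything that must survive an app-pool recycle, a Windows service (or a queue plus a service) is the more robust choice.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public static class ImportJobs
        {
            // job id -> percent complete; the progress page polls this
            public static readonly ConcurrentDictionary<Guid, int> Progress =
                new ConcurrentDictionary<Guid, int>();

            public static Guid Start(string uploadedFilePath)
            {
                var id = Guid.NewGuid();
                Progress[id] = 0;
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // Stand-in for the real work: extract the XML,
                    // insert into the DB, updating Progress[id] as you go.
                    for (int pct = 0; pct <= 100; pct += 10)
                    {
                        Thread.Sleep(500);
                        Progress[id] = pct;
                    }
                });
                return id;   // redirect the user to a progress page keyed by this id
            }
        }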

    Read the article

  • iPhone crash log with dSYM not loading debug information

    - by AngeDeLaMort
    Hello, I was trying to see why my application crashed on the device (iPhone) using the dSYM generated along with the (ad hoc) executable, but I don't know why there isn't any useful information in it. It seems that Organizer is able to find the appropriate dSYM and translate some data into more readable form, but when it comes to my application, I just get an address.

    Since I know how to reproduce the crash, I've tried to set up my build so it can help me in the future. I've checked that all the proper flags are set in the project build properties, and everything seems fine. After doing some research, it seems that all the information is stripped at link time, and the dSYM seems completely useless. I've played with some flags, but nothing changed.

    So, is there something special to do in order to make the crash file human-readable? Or is it impossible with an ad hoc build? The closest thing to working that I've tried was to build a debug version and look up the address in it; at least that seems to give the right file. So, I made a sample app, and here is what I have (the line I want is #4):

        Thread 0 Crashed:
        0   libobjc.A.dylib    0x00003ebc objc_msgSend + 20
        1   UIKit              0x0005c970 -[UIView dealloc] + 60
        2   UIKit              0x0005c840 -[UIImageView dealloc] + 76
        3   CoreFoundation     0x0003963a -[NSObject release] + 28
        4   MyApplication      0x000046a6 0x1000 + 13990
        5   UIKit              0x00069750 -[UIViewController view] + 44
        6   MyApplication      0x000053fa 0x1000 + 17402

    The crash is caused by two successive releases on an object. Thanks in advance.

    Read the article

  • Are there solutions for streamlining the update of legacy code in multiple places?

    - by ccomet
    I'm working in some old code that was originally designed for handling two different kinds of files, and I was recently tasked with adding a new kind of file to it. Most of my problems were solved by filling out an extensive XML file with a new entry that handles everything from what the lists are named to how the file type is written in plural lower case. But this ended up being insufficient, as there were maybe 50 different places in 24 different code files where I had to update hardcoded switch statements that only branched for the original two file types.

    Unfortunately there is no consistency in this: there are methods which operate half from the XML file and half from hardcode. Some of the files that look like they would operate off the XML file don't, and some where I would expect to need to update hardcode don't need it. So the only way to find the majority of these is to run through testing the whole system while only part of it is operational, find the one step to fix (when I'm lucky and the error logging actually tells me what is going on), and then run the whole thing again. This wastes time testing parts of the code which are already confirmed to work, time better spent testing the new parts I have to add on top of it all. It's a hassle and a half, and with my luck I can expect to have to add yet another new kind of file in the near future.

    Are there any solutions out there which can aid in this kind of endeavour? Something which can take some parameters describing the current features, document which points in a whole code project actually need to be updated, and run something nice the next time I need to add a new feature to the code. It needn't even be fully automated; something that would help me navigate straight to the specific points in everything, and maybe even record what kind of parameters need to be loaded, would do.

    I doubt it matters specifically, but the code is comprised of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files. It's all currently in a couple of big Visual Studio 2008 projects.
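
    No off-the-shelf tool will reliably find the scattered switches, but the usual endgame is to collapse them into one lookup so that the next file type is a single registration. A hypothetical C# sketch (names invented; the real entries would come from the existing XML file):

        using System.Collections.Generic;

        public class FileTypeDescriptor
        {
            public string ListName;          // what the lists are named
            public string PluralLowerCase;   // how the type is written in plural
            // ...one field per fact the old switch statements branched on
        }

        public static class FileTypes
        {
            static readonly Dictionary<string, FileTypeDescriptor> registry =
                new Dictionary<string, FileTypeDescriptor>
                {
                    { "TypeA", new FileTypeDescriptor { ListName = "ListA", PluralLowerCase = "type as" } },
                    { "TypeB", new FileTypeDescriptor { ListName = "ListB", PluralLowerCase = "type bs" } },
                };

            public static FileTypeDescriptor For(string typeKey)
            {
                return registry[typeKey];   // adding a file type = one new entry
            }
        }

    Each of the 50 switch sites then becomes a property read on the descriptor, and a missing entry fails loudly in one place instead of silently in 24 files.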

    Read the article

  • Enum and Dictionary<Enum, Action>

    - by Selcuk
    I hope I can explain my problem in a way that's clear for everyone; we need your suggestions. We have an enum type with more than 15 constants defined. We receive a report from a web service and translate one of its columns into this enum type, and based on what we receive we run specific functions, looked up in a Dictionary.

    Why am I asking for ideas? Let's say 3 of these enum constants map to specific functions in our Dictionary, but all the rest use the same function. Is there a way to add them to our Dictionary in a better way than adding them one by one? I also want to keep this structure because, when the time comes, we might have specific functions for the ones I described as "the rest".

    To be more clear, here's an example of what we're trying to do. The enum:

        public enum Reason
        {
            ReasonA, ReasonB, ReasonC, ReasonD, ReasonE, ReasonF,
            ReasonG, ReasonH, ReasonI, ReasonJ, ReasonK
        }

    Defining our Dictionary:

        public Dictionary<Reason, Action<CustomClassObj, string>> ReasonHandlers =
            new Dictionary<Reason, Action<CustomClassObj, string>>
        {
            { Reason.ReasonA, HandleReasonA },
            { Reason.ReasonB, HandleReasonB },
            { Reason.ReasonC, HandleReasonC },
            { Reason.ReasonD, HandleReasonGeneral },
            { Reason.ReasonE, HandleReasonGeneral },
            { Reason.ReasonF, HandleReasonGeneral },
            { Reason.ReasonG, HandleReasonGeneral },
            { Reason.ReasonH, HandleReasonGeneral },
            { Reason.ReasonI, HandleReasonGeneral },
            { Reason.ReasonJ, HandleReasonGeneral },
            { Reason.ReasonK, HandleReasonGeneral }
        };

    So basically what I'm asking is: is there a way to add the Reason/function pairs more intelligently? As you can see, after ReasonC all the other reasons use the same function. Thank you for your suggestions.
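
    One possible answer, sketched against the question's own types (this assumes it lives in the same class as the HandleReason* methods):

        public Dictionary<Reason, Action<CustomClassObj, string>> BuildHandlers()
        {
            // List only the special cases by hand...
            var handlers = new Dictionary<Reason, Action<CustomClassObj, string>>
            {
                { Reason.ReasonA, HandleReasonA },
                { Reason.ReasonB, HandleReasonB },
                { Reason.ReasonC, HandleReasonC }
            };

            // ...then give every remaining Reason the general handler.
            foreach (Reason r in Enum.GetValues(typeof(Reason)))
            {
                if (!handlers.ContainsKey(r))
                    handlers.Add(r, HandleReasonGeneral);
            }
            return handlers;
        }

    When one of "the rest" later needs its own function, it simply moves up into the hand-written block.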

    Read the article

  • Conditionally overriding a system method via categories in Objective-C?

    - by adib
    Hi. Is there a way to provide a method implementation (that bears the exact same name as a method defined by the framework) only when the method isn't already defined by the system? For example, the method [NSSomeClass someMethod:] exists only in Mac OS X 10.6; if my app runs on 10.5, I will provide that method's definition in a category, but when the app runs on 10.6 I want the OS-provided method to run.

    Background: I'm creating an app targeted at both 10.5 and 10.6. The problem is that I recently realized that the method +[NSSortDescriptor sortDescriptorWithKey:ascending:] only exists in 10.6, and my code is already littered with that method call. I could provide a default implementation for it (since this time it's not too difficult to implement myself), but I want the "native" one to be called whenever my app runs on 10.6. Furthermore, if I encounter similar problems in the future (with methods that are more difficult to implement myself), I might not be able to get away with providing a one-liner replacement.

    This question is vaguely similar to "Override a method via ObjC Category and call the default implementation?", but the difference is that I want to provide implementations only when the system doesn't already have one. Thanks.

    Read the article

  • Output reformatted text within a file included in a JSP

    - by javanix
    I have a few HTML files that I'd like to include via tags in my webapp. Within some of the files I have pseudo-dynamic code: specially formatted bits of text that, at runtime, should be resolved to their respective bits of data in a MySQL table. For instance, the HTML file might include a line that says:

        Welcome, [username].

    I want this resolved to (using a logged-in user's data):

        Welcome, [email protected].

    This would be simple to do in a JSP file, but the requirements dictate that the files will be created by people who know basic HTML but not JSP. Simple text tags like this should be easy enough for me to explain to them, however.

    I have the code set up to do resolutions like that for strings, but can anyone think of a way to do it across files? I don't actually need to modify the file on disk; I just need to load the content, modify it, and output it within the containing JSP file. I've been playing around with loading the files into strings via Apache's readFileToString, but I can't figure out how to load files from a specific folder within the webapp's content directory without hardcoding the path and having to worry about it breaking if I deploy to a different system in the future.

    Read the article

  • Why does my .NET 4 application know .NET 4 is not installed

    - by Tergiver
    I developed an application the other day that targets .NET 4, and XCOPY-installed it to a Windows XP machine. I had told the owner of the machine that they would need to install .NET Framework 4 to run my app, and he told me he did (not a reliable source). When I ran the application, I was presented with a message box saying that this app requires .NET Framework 4; would I like to install it? Clicking the Yes button took me to the Microsoft web site, and a few clicks later .NET 4 was installed and the application successfully launched.

    Now, I normally don't develop applications that target the latest version of .NET; I always target the lowest version I can (what features do I really need?). So this was my first .NET 4 app (and I only targeted 4 because it used a library that did). In the past, XCOPY-installing .NET applications to a machine that didn't have the correct version of .NET installed resulted in the application simply crashing on startup, with no useful information presented to the user.

    Was the check built into my app because I targeted .NET 4? Was it something already installed on the target machine? I love the feature; I just want to know precisely how to leverage it in the future.

    Read the article

  • Handle order dependence in loops

    - by Matt
    Hey all, I'm making a templating system where I instantiate each tag using a foreach loop. The issue is that some of the tags rely on each other, so I'm wondering how to get around the ordering imposed by the loop. Here's an example:

        class A {
            public $width;
            function __construct() {
                $this->width = $B->width; // Undefined! Or at least not set yet..
            }
        }

        class B {
            public $width;
            function __construct() {
                $this->width = "500px";
            }
            function __tostring() {
                return "Hello World!";
            }
        }

    Template.php:

        $tags = array("A", "B");
        foreach ($tags as $tag) {
            $TagObj[$tag] = new $tag();
        }
        echo $TagObj['A']->width; // Nadamundo!

    EDIT: Just to clarify, my main problem is that class A relies on class B, but class A is instantiated before class B, so width has not yet been defined in class B. I am looking for a good way to make sure all the classes are loaded for everyone, allowing the interdependencies to exist.

    For the future, please don't consider any syntax errors; I just made up this example on the spot. Also assume that I have access to class B from class A after class B gets instantiated. I know this has applications elsewhere and I'm sure this has been solved before; if someone could enlighten me or point me in the right direction, that'd be great! Thanks! Matt Mueler
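
    One standard fix is two-phase construction: create every tag object first, then let each one resolve its references once everything exists. A sketch of the pattern in C# (the question's code is PHP; all names are invented):

        using System.Collections.Generic;

        interface ITag
        {
            void Resolve(IDictionary<string, ITag> tags);   // phase 2
        }

        class B : ITag
        {
            public string Width = "500px";
            public void Resolve(IDictionary<string, ITag> tags) { }
        }

        class A : ITag
        {
            public string Width;
            public void Resolve(IDictionary<string, ITag> tags)
            {
                Width = ((B)tags["B"]).Width;   // safe: B already exists by phase 2
            }
        }

        static class Template
        {
            static void Main()
            {
                // Phase 1: construct everything, no cross-references yet.
                var tags = new Dictionary<string, ITag>
                {
                    { "A", new A() }, { "B", new B() }
                };
                // Phase 2: resolve references only after all tags exist.
                foreach (var t in tags.Values)
                    t.Resolve(tags);
            }
        }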

    Read the article

  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hotfix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch, then do a merge (range of revisions) from QA into the Dev working copy. It brings in the five files, and there is a conflict on an externals and an ignore property, which I resolve by "using local" (Dev).

    When I check modifications or commit, I expect to see the five files I merged as the only changes. However, I get close to 700 "modified" files showing up in the commit dialog. If I select one of these files and "Compare with base", WinMerge comes up and says the files are identical. I have tried this with file dates set to "last committed" and not.

    Why are all of these files showing up as modified when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?

    Read the article

  • Is there a safe / standard way to manage unstructured memory in C++?

    - by andand
    I'm building a toy VM that requires a block of memory for storing and accessing data elements of different types and different sizes. I've done this by writing a wrapper class around a uint8_t[] data block of the needed size. That class has some template methods to write/read typed data elements to/from arbitrary locations in the memory block, both of which check to make certain the bounds aren't violated. These methods use memmove in what I hope is a more or less safe manner.

    That said, while I am willing to press on in this direction, I've got to believe that others with more expertise have been here before and might be willing to share their wisdom. In particular:

    1) Is there a class in one of the C++ standards (past, present, future) that has been defined to perform a function similar to what I have outlined above?

    2) If not, is there a (preferably free as in beer) library out there that does?

    3) Short of that, besides bounds checking and the inevitable issue of writing one type to a memory location and reading a different one from that location, are there other issues I should be aware of?

    Thanks. -&&
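
    For readers who want the shape of the wrapper being described, here is a rough equivalent in C# (the asker's version is C++ template methods over memmove; BitConverter and Buffer.BlockCopy stand in for them here, and only one type is shown):

        using System;

        public class MemoryBlock
        {
            readonly byte[] data;
            public MemoryBlock(int size) { data = new byte[size]; }

            void CheckBounds(int offset, int count)
            {
                // The bounds check both methods share.
                if (offset < 0 || offset + count > data.Length)
                    throw new ArgumentOutOfRangeException("offset");
            }

            public void WriteInt32(int offset, int value)
            {
                byte[] bytes = BitConverter.GetBytes(value);
                CheckBounds(offset, bytes.Length);
                Buffer.BlockCopy(bytes, 0, data, offset, bytes.Length);
            }

            public int ReadInt32(int offset)
            {
                CheckBounds(offset, sizeof(int));
                return BitConverter.ToInt32(data, offset);
            }
        }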

    Read the article
