Search Results

Search found 29477 results on 1180 pages for 'complex script rendering'.

  • What are the most commonly used and basic Apache htaccess redirects?

    - by bybe
    This question is here so we can offer a single place for users who are looking for information on how to make one or more common or basic redirects in Apache using the htaccess file. All future questions pertaining to information that is covered by this question should be closed as duplicates of it, as per this Meta question. What's the point of this question? The idea, while not perfect, is to catch the most commonly asked questions regarding redirects using htaccess on the Apache platform, whether on some type of LAMP stack or a live server. The answers should generally be those that you could imagine are used by hundreds of thousands of sites world-wide and that are constantly asked here at Pro Webmasters in various forms. A few examples of the type of answers we are looking for: How can I redirect non-www to www? How can I redirect a subdomain to the main domain? How can I redirect a subfolder to the root or to a subdomain? How can I redirect an old URL to a new URL? A few examples of the types of answers that we are not looking for: Answers that do not involve a redirect. Any answers relating to Nginx, IIS or any other non-Apache platform. Answers that involve custom and complex string or query removals. Resources for advanced to complex mod_rewrite rules: Everything you ever wanted to know about mod_rewrite rules but were afraid to ask. Please note that this question is still under construction and may need some refining, either by myself or by a real moderator of Pro Webmasters; if you have any concerns or questions please use the meta page I made a few days back here.
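
    As an illustration of the kind of answer this question is meant to collect, a minimal .htaccess sketch for two of the cases above might look like the following (assuming mod_rewrite and mod_alias are enabled; example.com and the page names are placeholders):

        # Redirect non-www to www with a permanent (301) redirect
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [L,R=301]

        # Redirect an old URL to a new URL
        Redirect 301 /old-page.html http://www.example.com/new-page.html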

    Read the article

  • Evolution of mainstream programming languages: simplicity versus complexity.

    - by Giorgio
    I had posted this question on http://stackoverflow.com but it was suggested that it may be more appropriate to post it on this forum. I did a quick search on this site and it seems to me that this question has not been asked yet; please give me a hint if the topic has already been raised by someone else. Update: I have rephrased this question, removed personal opinions and made it shorter. I hope in this way it is better suited for this forum. Looking at the recent development of Java (Java 7) and C++ (C++0x), I see that new features are being added to these languages. For sure this makes it easier to use certain programming idioms, adding to the productivity of developers. On the other hand, there might be the following risks: a language becomes too big, complex, and difficult to understand; it lacks coherence in its design, e.g. if it mixes different paradigms like object-orientation and functional programming, which might not fit well together. Questions: what is more important to you as a developer: to have a rich language that captures a large collection of programming idioms, or to have a small language that aims at coherence and simplicity (of course, with a good deal of libraries and tools accompanying it)? Or is it possible to have both? With respect to these issues: how do you judge the current evolution of mainstream programming languages like Java or C++? Are they becoming too complex, less intuitive? Do they have enough features? Do they need more? Are they still easy enough to understand and use?

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there are usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought-processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were simpler, you would have fewer choices. Sometimes that's a good thing: just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty trade-off that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing!

    Read the article

  • Software Licensing [closed]

    - by Craig
    A colleague of mine wanted a means to do something, so it was suggested that I write some software to do it. The software has turned into more than the original specification and is now something rather complex, however it is still not fully functional. My colleague has not paid me anything so far and I am unwilling to continue writing the software until some faith has been reciprocated in my direction, as I have put a lot of effort into writing it. I am also unwilling to finish the software, as I do not want to give away a huge chunk of my time and effort for free, nor do I want to be under-compensated for my efforts. Some concerns I have: if I finish the software, what if the client doesn't pay me anything or pays too little? Or what if I write the software to a usable level, but not complete, and the client pays me too little? I have lost my motivation to finish the software, as more and more specifications have been added and I have developed a substantially complex piece of software while effectively being paid nothing. To finish the software, I need motivation; money would do this, however the client doesn't want to pay for something that isn't complete, yet keeps adding more requirements. I seem to be in a catch-22 with this, as I have developed some software on faith and have had no faith reciprocated in my direction. I'm really not sure how to get some payment from the client, or how to develop a licensing model so that I get some money from the client and development resumes.

    Read the article

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I frequently visit here and SO, I've noticed a trend for writing unit tests whenever possible and, as my software gets more complex and sophisticated, I see automated testing as a good aid to debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say the data isn't under your control (e.g., it comes from a 3rd party system): how can you efficiently test your processing routines without having to hand-write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky, particularly when it comes to finding data to test against...
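
    One common answer to the first part is to run the query under test against a small, hand-picked fixture in an in-memory database and assert on the exact rows returned; a rough Python/SQLite sketch of the idea (the table, query and values are invented purely for illustration):

        import sqlite3
        import unittest

        # Hypothetical query under test
        QUERY = "SELECT name FROM orders WHERE total > 100 ORDER BY name"

        class OrderQueryTest(unittest.TestCase):
            def setUp(self):
                # In-memory database with a tiny, known fixture instead of reams of dummy data
                self.conn = sqlite3.connect(":memory:")
                self.conn.execute("CREATE TABLE orders (name TEXT, total REAL)")
                self.conn.executemany(
                    "INSERT INTO orders VALUES (?, ?)",
                    [("alice", 150.0), ("bob", 50.0), ("carol", 300.0)],
                )

            def test_returns_only_large_orders(self):
                rows = self.conn.execute(QUERY).fetchall()
                self.assertEqual(rows, [("alice",), ("carol",)])

            def tearDown(self):
                self.conn.close()

        if __name__ == "__main__":
            unittest.main()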

    Read the article

  • For an AJAX-heavy web application, which would be better: SOAP or REST?

    - by coder
    I'm building an AJAX-heavy application (client-side strictly HTML/CSS/JS) which will be getting all its data and using server business logic via web services. I know REST seems to be the hot topic but I can't find any good arguments. The main argument seems to be that it's "light-weight". My impression so far is that WSDL/SOAP based services are more expressive and allow for a more complex transfer of data. It appears that SOAP would be more useful in the application I'm building, where the only code consuming the services will be the JS downloaded in the client browser. REST, on the other hand, seems to have a smaller entry barrier and so can be more useful for services like Twitter in allowing other developers to consume those services easily. Also, REST seems to be better suited for simple data transfers. So in summary, SOAP is useful for complex data transfer and REST is useful for simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages, but perhaps there are other factors. What are your thoughts on the pros/cons of SOAP/REST for a heavy AJAX web app?

    Read the article

  • What is the value of checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the code does not execute the bug based on how it is currently called. But unit tests can be created executing specific scenarios that will cause the bug to be seen, and these are valid inputs. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should this unit test be flagged with ignore, priority, category etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that should fail vs. failures due to a code check-in would be difficult.
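
    On the flagging question, most test frameworks can mark a test as a known/expected failure so that it documents the bug without breaking the build; a small Python sketch of that idea (the enum and parsing function are hypothetical stand-ins for the example above):

        import enum
        import unittest

        class Animal(enum.Enum):
            Null = 0
            Cat = 1

        def parse_animal(value):
            # Current, case-sensitive implementation (the known bug)
            return Animal.Cat if value == "Cat" else Animal.Null

        class ParseAnimalTest(unittest.TestCase):
            def test_exact_case(self):
                self.assertEqual(parse_animal("Cat"), Animal.Cat)

            @unittest.expectedFailure  # documents the bug without failing the build
            def test_lower_case_should_also_match(self):
                self.assertEqual(parse_animal("cat"), Animal.Cat)

        if __name__ == "__main__":
            unittest.main()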

    Read the article

  • Am I programming too slow?

    - by Jonn
    I've only been a year in the industry and I've had some problems making estimates for specific tasks. Before you close this: yes, I've already read http://programmers.stackexchange.com/questions/648/how-to-respond-when-you-are-asked-for-an-estimate and that's about the same problem I'm having. But I'm looking for a more specific gauge of experiences, something quantifiable, or other programmers' average performance which I should aim for and base my estimates on. The answers there range over weeks, and I was looking more for an answer at the level of a task assigned for a day or so. (Note that this doesn't include submitting for QA or documentation, just the actual development time, from writing tests if I used TDD to making the page, before having it submitted to testing.) My current rate is as follows (on ASP.NET WebForms): I'm able to develop a simple data entry page with a grid listing (no complex logic, just Creating and Reading) on an already built architecture in one full day's (8 hours) time. Adding complex functionality, and Update and Delete pages, adds another full day to the task. If I have to start the page from scratch (no solution, no existing website) it takes me another full day. If I encounter something new or haven't done something before, it (not always, but often) takes me another full day. Whenever I make an estimate that's longer than expected, I feel that others think I'm lagging a lot behind everyone else. I'm just concerned, as there have been expectations that when it's just one page it should take me no more than a full day. Yes, there definitely is more room for improvement. There always is. I have a lot to learn. But I would like to know if my current rate is way too slow, just average, or average for someone no more than a year in the industry.

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset - then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation that has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop that get altered during the loop: variable_1 = 0; variable_2 = 0; for object in queryset: if object.condition_a and variable_2 > 0: variable_1 += 1; ... more conditions that alter the variables ...; return queryset, context. So according to the theory I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find it makes the code less readable when you need to constantly jump from one method to the next figuring out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't getting hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?
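
    For reference, the extraction being debated can be as small as giving the aggregation step a name of its own; a simplified Python sketch (the condition and field names are invented, and a plain list stands in for the queryset):

        from dataclasses import dataclass

        @dataclass
        class Item:
            condition_a: bool
            amount: int

        def aggregate_counts(items):
            # The extracted "inner loop": one named step instead of inline variables in the view
            variable_1 = variable_2 = 0
            for item in items:
                if item.condition_a and variable_2 > 0:
                    variable_1 += 1
                variable_2 += item.amount
                # ... more conditions that alter the variables ...
            return variable_1, variable_2

        if __name__ == "__main__":
            print(aggregate_counts([Item(True, 5), Item(True, 3), Item(False, 1)]))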

    Read the article

  • Is it okay to showcase templates/layouts recreated with different code in a portfolio?

    - by Souta
    I have several different templates/layouts, both simple and complex. I recreated these templates multiple times, each time using different code. (Say a complex one was originally made in only HTML and CSS; I recreated it using HTML, JavaScript and CSS, then again with an HTML and PHP concoction, and so on.) I wanted to showcase my work and skills by doing this, but I don't know if it would be okay for all of that to go into a resumé/portfolio. This is why: Freelancing - does a potential business really care about how their site is made, as long as it looks and functions to their liking? (As in, should I just show only one example of each template/layout and not the multiple recreations?) Potential hire - however, if a potential employer were to stumble across my resumé/portfolio, would having the multiple recreations do any good for a career outlook? (As in, this potential employer is a company where I could be working on a team to create/develop sites and not be freelancing; would a lack of skill-shining turn this employer away because I didn't set myself apart and show that I'm not just like every other budding web designer?) Those two issues have me wondering if it is okay to have a resumé/portfolio combined for this specific reason. Or does something like this not matter to a potential business (as a freelancer) because they wouldn't care either way as long as it looks and functions to their liking, and therefore it is okay to showcase the recreations with the originals?

    Read the article

  • No sound after upgrading to Ubuntu 11.10 from win7

    - by Tilman
    Just as a preface to my question, I'd like to note that I'm just now entering the world of Linux (unless you count my Android, but that's a very different experience...). I have two computers now that run Ubuntu 11.10, the first of which I've had very few problems with, aside from figuring out the basics. The second, from which I'm writing this question, has (up to this point) only had one problem... no sound. I've read a couple of similar questions and found little help, as the component catalog doesn't have my computer listed (in fact I'm not surprised; this is a POS I had my mom grab from her work before they officially closed the doors behind them). It had perfect sound beforehand, and no sound now. sudo lspci -v brings up: 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) Subsystem: Intel Corporation Device d608 Flags: bus master, fast devsel, latency 0, IRQ 45 Memory at ff980000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [100] Virtual Channel Capabilities: [130] Root Complex Link Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel. Any help would be greatly appreciated; me and my gf just wanna watch a damn movie lol.

    Read the article

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: Case Sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (ex. case sensitive or not), or code that does not execute the bug based on how it is currently called. But unit tests can be created executing specific scenarios that will cause the bug to be seen and are valid inputs. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should this unit test be flagged with ignore, priority, category etc, to determine whether a build was successful based on tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs and weeding through the ones that should fail vs. failures due to a code check-in would be difficult to find.

    Read the article

  • How to optimize calls to multiple APIs at once and return as one set?

    - by Martin
    I have a web app that searches across 2 APIs right now. I have my own RESTful web service that I call, and it does all the work on the backend to asynchronously call the 2 APIs and concatenate them into one result set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding, and other issues that arise? What would be the best way to approach this? Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the backend to handle the multiple API calls without all the extra complexity it adds? If I go the route of a service call per API, my client code gets more complex (and I have a lot of clients), it's more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?
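
    One common backend pattern for this is a fan-out with a per-call timeout, where failed or slow APIs are simply dropped from the merged result; a rough Python asyncio sketch (the fetch functions are stand-ins for real API calls):

        import asyncio

        async def fetch_api(name, delay, fail=False):
            # Stand-in for a real API call
            await asyncio.sleep(delay)
            if fail:
                raise RuntimeError(f"{name} failed")
            return [f"{name}-result"]

        async def search_all():
            calls = [
                fetch_api("api_a", 0.1),
                fetch_api("api_b", 0.2, fail=True),  # one API erroring out
                fetch_api("api_c", 5.0),             # one API too slow
            ]
            # Give every call the same deadline; collect errors instead of raising
            results = await asyncio.gather(
                *(asyncio.wait_for(c, timeout=1.0) for c in calls),
                return_exceptions=True,
            )
            merged = []
            for r in results:
                if isinstance(r, Exception):
                    continue  # skip APIs that failed or timed out
                merged.extend(r)
            return merged

        if __name__ == "__main__":
            print(asyncio.run(search_all()))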

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue. Context: I have historical data about property (house) sales collected from various sources in a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source. Example queries: Simple: for a given XYZ postcode, what is the average house price for a 3-bedroom house? Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and other deeper insights like house building type, year built, features)? In addition to the average price, the application should support other property info - maximum or minimum price, etc. - and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be: what is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)? The challenge I have is identifying the domain (BI / data analytics, DB design, DB query interface, DW-related, or something else) this problem (dynamic queries on historic data) belongs to, so that I can do further exploration. My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues. DB design - as I understand it, an RDBMS works well if you know the data model at design time; I expect the attributes about a property or other entities (user) that I am going to bring in to evolve quickly, hence maintenance would be an issue, and as I am going to have multiple users executing queries at the same time, performance would be a bottleneck. Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using those general-purpose tools makes me feel like I am doing assembly programming to solve my problem). Big-data-related solutions are for analysing data from multiple unrelated domains. So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for a property listing or similar portal.)
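
    At a much smaller scale than the question intends, the "dynamic query" part can be sketched as building an aggregate from whatever filters the caller happens to supply; a Python/SQLite illustration with an invented schema (in real code the column names would need whitelisting before being interpolated):

        import sqlite3

        def average_price(conn, **filters):
            # Build the WHERE clause from whatever attributes the caller filters on
            where = " AND ".join(f"{col} = ?" for col in filters) or "1=1"
            sql = f"SELECT AVG(price) FROM sales WHERE {where}"
            return conn.execute(sql, tuple(filters.values())).fetchone()[0]

        if __name__ == "__main__":
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE sales (postcode TEXT, bedrooms INTEGER, price REAL)")
            conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
                ("XYZ", 3, 200000.0), ("XYZ", 3, 220000.0), ("ABC", 2, 150000.0),
            ])
            print(average_price(conn, postcode="XYZ", bedrooms=3))  # 210000.0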

    Read the article

  • Upgrading to 12.10 on an external hard drive

    - by Tom Childers
    I did some googling on this and didn't find anything specific for my situation. I currently have 12.04 installed on an external USB hard drive. It's working great. I want to upgrade it to 12.10. My bandwidth is very limited, so I have a friend who will download 12.10 for me and put it on a flash stick; then I can upgrade without having to do the download myself. Which particular version of the 12.10 download file(s) should I get? Are there alternate 12.10 downloads that have all the packages? How do I set it up so that when I upgrade 12.04 I can specify that it look in some local repository for the 12.10 files? Can I just dump the 12.10 files in some local directory, or do I have to go through some complex commands to create a local repository? I'm pretty new to Linux, so a long process of complex terminal commands will probably be a show-stopper for me. Remember that my 12.04 install resides on an external hard drive, and I have a laptop with multiple USB ports. Thanks! Advait

    Read the article

  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating the complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app - most of the real code resides in the models, where the data translation happens. What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data so that my tests can anticipate the kind of data that the app should be returning when the tests run.
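
    The usual approach is to test the translation logic in isolation by feeding it hand-built rows instead of hitting a database; a Python analogue of that idea (the row and object shapes are invented, and nodeunit would play the role unittest plays here):

        import unittest

        def rows_to_tree(rows):
            # Model-layer translation: flat (parent, child) SQL rows -> nested dicts
            tree = {}
            for parent, child in rows:
                tree.setdefault(parent, {"name": parent, "children": []})
                if child is not None:
                    tree[parent]["children"].append(child)
            return tree

        class RowsToTreeTest(unittest.TestCase):
            def test_children_are_grouped_under_parent(self):
                rows = [("a", "x"), ("a", "y"), ("b", None)]  # dummy rows, no DB needed
                tree = rows_to_tree(rows)
                self.assertEqual(tree["a"]["children"], ["x", "y"])
                self.assertEqual(tree["b"]["children"], [])

        if __name__ == "__main__":
            unittest.main()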

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine; I had methods with easy to understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc. Suddenly, everything is very messy: stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain... Development becomes less quick, less agile, and catching those bugs that happen once in 1000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.
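
    One way to keep such chains debuggable is to express each procedure as a single sequential coroutine with explicit branch points rather than scattered continuations; a hedged Python asyncio sketch (the repository operations are invented):

        import asyncio

        async def fetch_order(order_id):
            await asyncio.sleep(0.01)  # stand-in for a web service call
            return {"id": order_id, "paid": order_id % 2 == 0}

        async def charge(order):
            await asyncio.sleep(0.01)
            return {**order, "paid": True}

        async def ship(order):
            await asyncio.sleep(0.01)
            return f"order {order['id']} shipped"

        async def process(order_id):
            # The whole chain reads top-to-bottom; a breakpoint here shows exactly where you are
            order = await fetch_order(order_id)
            if not order["paid"]:          # branch on an intermediate result
                order = await charge(order)
            return await ship(order)

        if __name__ == "__main__":
            print(asyncio.run(process(3)))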

    Read the article

  • OpenXML error “file is corrupt and cannot be opened.”

    - by nmgomes
    From time to time I hear some people saying their new web application supports data export to Excel format. So far so good … but they don't tell the whole story … in fact, almost all the time what is happening is that they are exporting data to a comma-separated file or simply exporting GridView-rendered HTML to an xls file. Ok … it works, but it's not something I would be proud of. So … yesterday I decided to take a look at the Office Open XML File Formats Specification (Microsoft Office 2007+ format), based on well-known technologies: ZIP and XML. I started by installing the Open XML SDK 2.0 for Microsoft Office and playing with some samples. Then I decided to try it on a more complex web application and the "file is corrupt and cannot be opened." message started happening. Google shows that many people suffer from the same, and it seems there are many reasons that can trigger this message. Some are related to the process itself, others to encodings or even styling. Well, none solved my problem and I had to dig … well, not that much: I simply changed the output file extension to zip and extracted the zip content. Then I did the same to the output file from my first sample, compared both zip contents with SourceGear DiffMerge and found that my problem was culture related. Yes, my complex application sets Thread.CurrentThread.CurrentCulture to a non-English culture. For sample purposes I was simply using the ToString method to convert numbers and dates to a string representation, but forgot that XML is culture invariant and thus using a decimal separator other than "." will result in a deserialization problem. I solved the "file is corrupt and cannot be opened." error by using the Convert.ToString(object, CultureInfo.InvariantCulture) method instead of the ToString method. Hope this can help someone.

    Read the article

  • Merging similar graphs based solely on the graph structure?

    - by Buttons840
    I am looking for (or attempting to design) a technique for matching nodes from very similar graphs based on the structure of the graph*. In the examples below, the top graph has 5 nodes, and the bottom graph has 6 nodes. I would like to match the nodes from the top graph to the nodes in the bottom graph, such that the "0" nodes match, and the "1" nodes match, etc. This seems logically possible, because I can do it in my head for these simple examples. Now I just need to express my intuition in code. Are there any established algorithms or patterns I might consider? (* When I say based on the structure of the graph, I mean the solution shouldn't depend on the node labels; the numeric labels on the nodes are only for demonstration.) I'm also interested in the performance of any potential solutions. How well will they scale? Could I merge graphs with millions of nodes? In more complex cases, I recognize that the best solution may be subject to interpretation. Still, I'm hoping for a "good" way to merge complex graphs. (These are directed graphs; the thicker portion of an edge represents the head.)
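
    One simple starting point (well short of full graph matching) is to give every node a structural signature, such as its in-degree, out-degree and the degrees of its successors, and to propose matches between nodes whose signatures agree; a rough Python sketch with invented edge lists:

        from collections import defaultdict

        def signatures(edges):
            # edges: list of (source, target) pairs of a directed graph
            out_deg, in_deg = defaultdict(int), defaultdict(int)
            for a, b in edges:
                out_deg[a] += 1
                in_deg[b] += 1
            nodes = set(out_deg) | set(in_deg)
            # Signature: this node's degrees plus the sorted degrees of its successors
            return {
                n: (in_deg[n], out_deg[n],
                    tuple(sorted((in_deg[b], out_deg[b]) for a, b in edges if a == n)))
                for n in nodes
            }

        def match(edges_a, edges_b):
            sig_a, sig_b = signatures(edges_a), signatures(edges_b)
            candidates = defaultdict(list)
            for node, sig in sig_b.items():
                candidates[sig].append(node)
            # Map each node of graph A to the nodes of graph B with the same signature
            return {node: candidates.get(sig, []) for node, sig in sig_a.items()}

        if __name__ == "__main__":
            a = [(0, 1), (1, 2), (2, 3), (3, 4)]           # simple chain of 5 nodes
            b = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]   # similar graph with one extra node
            print(match(a, b))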

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know what common architectural patterns exist for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information is computationally intensive and/or has complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this: 1. A scheduled task in app B that connects to an API of app A that provides some aggregated data; app B then stores it in Redis and uses that to render pages (cons: it makes complex calculations within a web request and requires lots of work, e.g. an API server and client, storage, etc.; pros: the business logic still lives in app A). 2. A scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis to be accessed by app B. 3. A scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it. I'd like to know if there is a well-known architectural solution to this type of problem and, if not, what other pros/cons there are for the solutions I've suggested.
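
    For the storage side of options 2 and 3, the per-month aggregates map naturally onto a Redis hash that the scheduled job writes and app B reads; a minimal sketch using the redis-py client (the key names are invented and a local Redis instance is assumed):

        import json
        import redis  # pip install redis

        r = redis.Redis(host="localhost", port=6379, db=0)

        def store_monthly_stats(month, stats):
            # One hash per metric group, one field per month, e.g. "dashboard:sales" -> {"2024-01": {...}}
            r.hset("dashboard:sales", month, json.dumps(stats))

        def load_months(months):
            raw = r.hmget("dashboard:sales", months)
            return [json.loads(v) if v else None for v in raw]

        if __name__ == "__main__":
            store_monthly_stats("2024-01", {"sales": 1200, "responsiveness": 0.93})
            print(load_months(["2024-01", "2023-12"]))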

    Read the article

  • How to correct a junior, but encourage him to think for himself? [closed]

    - by Phil
    I am the lead of a small team where everyone has less than a year of software development experience. I wouldn't by any means call myself a software guru, but I have learned a few things in the few years that I've been writing software. When we do code reviews I do a fair bit of teaching and correcting mistakes. I will say things like "This is overly complex and convoluted, and here's why," or "What do you think about moving this method into a separate class?" I am extra careful to communicate that if they have questions or dissenting opinions, that's ok and we need to discuss. Every time I correct someone, I ask "What do you think?" or something similar. However they rarely if ever disagree or ask why. And lately I've been noticing more blatant signs that they are blindly agreeing with my statements and not forming opinions of their own. I need a team who can learn to do things right autonomously, not just follow instructions. How does one correct a junior developer, but still encourage him to think for himself? Edit: Here's an example of one of these obvious signs that they're not forming their own opinions: Me: I like your idea of creating an extension method, but I don't like how you passed a large complex lambda as a parameter. The lambda forces others to know too much about the method's implementation. Junior (after misunderstanding me): Yes, I totally agree. We should not use extension methods here because they force other developers to know too much about the implementation. There was a misunderstanding, and that has been dealt with. But there was not even an OUNCE of logic in his statement! He thought he was regurgitating my logic back to me, thinking it would make sense when really he had no clue why he was saying it.

    Read the article

  • Netbeans Error "build-impl.xml:688" : The module has not been deployed.

    - by Sarang
    Hi everyone, I am getting this error while deploying the jsp project : In-place deployment at C:\Users\Admin\Documents\NetBeansProjects\send-mail\build\web Initializing... deploy?path=C:\Users\Admin\Documents\NetBeansProjects\send-mail\build\web&name=send-mail&force=true failed on GlassFish Server 3 C:\Users\Admin\Documents\NetBeansProjects\send-mail\nbproject\build-impl.xml:688: The module has not been deployed. BUILD FAILED (total time: 0 seconds) Is there any solution for this ? Stack Trace : SEVERE: DPL8015: Invalid Deployment Descriptors in Deployment descriptor file WEB-INF/web.xml in archive [web]. Line 19 Column 23 -- cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected. SEVERE: DPL8005: Deployment Descriptor parsing failure : cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected. SEVERE: Exception while deploying the app java.io.IOException: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected. 
at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:170) at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:79) at com.sun.enterprise.v3.server.ApplicationLifecycle.loadDeployer(ApplicationLifecycle.java:612) at com.sun.enterprise.v3.server.ApplicationLifecycle.setupContainerInfos(ApplicationLifecycle.java:554) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:262) at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183) at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224) at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365) at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:662) Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected. 
at com.sun.enterprise.deployment.io.DeploymentDescriptorFile.read(DeploymentDescriptorFile.java:304) at com.sun.enterprise.deployment.io.DeploymentDescriptorFile.read(DeploymentDescriptorFile.java:225) at com.sun.enterprise.deployment.archivist.Archivist.readStandardDeploymentDescriptor(Archivist.java:614) at com.sun.enterprise.deployment.archivist.Archivist.readDeploymentDescriptors(Archivist.java:366) at com.sun.enterprise.deployment.archivist.Archivist.open(Archivist.java:238) at com.sun.enterprise.deployment.archivist.Archivist.open(Archivist.java:247) at com.sun.enterprise.deployment.archivist.Archivist.open(Archivist.java:208) at com.sun.enterprise.deployment.archivist.ApplicationFactory.openArchive(ApplicationFactory.java:148) at org.glassfish.javaee.core.deployment.DolProvider.load(DolProvider.java:162) ... 31 more Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'display-name'. One of '{"http://java.sun.com/xml/ns/javaee":servlet-class, "http://java.sun.com/xml/ns/javaee":jsp-file, "http://java.sun.com/xml/ns/javaee":init-param, "http://java.sun.com/xml/ns/javaee":load-on-startup, "http://java.sun.com/xml/ns/javaee":enabled, "http://java.sun.com/xml/ns/javaee":async-supported, "http://java.sun.com/xml/ns/javaee":run-as, "http://java.sun.com/xml/ns/javaee":security-role-ref, "http://java.sun.com/xml/ns/javaee":multipart-config}' is expected. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:131) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:384) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:318) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:417) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3182) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1806) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:705) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:400) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2755) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at javax.xml.parsers.SAXParser.parse(SAXParser.java:395) at com.sun.enterprise.deployment.io.DeploymentDescriptorFile.read(DeploymentDescriptorFile.java:298) ... 
39 more. Is there any solution for this?

    Read the article

  • Spring MVC: CSS and JavaScript are not working properly

    - by user2788424
    the css and javascript is not take effect on my page. I google online, people saying this is the magic, but not happening on my page. <mvc:resources mapping="/resources/**" location="/resources/" /> this is the error: Nov 02, 2013 9:19:29 PM org.springframework.web.servlet.DispatcherServlet noHandlerFound WARNING: No mapping found for HTTP request with URI [/myweb/resources/css/styles.css] in DispatcherServlet with name 'dispatcher' Nov 02, 2013 9:19:29 PM org.springframework.web.servlet.DispatcherServlet noHandlerFound WARNING: No mapping found for HTTP request with URI [/myweb/resources/script.js] in DispatcherServlet with name 'dispatcher' Nov 02, 2013 9:19:29 PM org.springframework.web.servlet.DispatcherServlet noHandlerFound WARNING: No mapping found for HTTP request with URI [/myweb/resources/js/jquery-1.10.2.min.js] in DispatcherServlet with name 'dispatcher' here is the applicationContext.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.2.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd"> <context:component-scan base-package="org.peterhuang.myweb" /> <mvc:resources mapping="/resources/**" location="/resources/" /> <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"> </bean> <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"> </bean> <!-- Hibernate Transaction Manager --> <bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory" /> </bean> <mvc:annotation-driven /> <!-- Activates annotation based transaction management --> <tx:annotation-driven /> <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="location" value="classpath:jdbc.properties" /> </bean> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" /> <property name="prefix" value="/WEB-INF/"></property> <property name="suffix" value=".jsp"></property> </bean> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${jdbc.driverClassName}" /> <property name="url" value="${jdbc.url}" /> <property name="username" value="${jdbc.username}" /> <property name="password" value="${jdbc.password}" /> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="packagesToScan" value="org.peterhuang.myweb" /> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect"> ${jdbc.dialect} </prop> <prop key="hibernate.show_sql"> ${hibernate.show_sql} </prop> <prop 
key="hibernate.format_sql"> ${hibernate.format_sql} </prop> </props> </property> </bean> here is the web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <display-name>my web</display-name> <servlet> <servlet-name>dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>dispatcher</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <welcome-file-list> <welcome-file>/WEB-INF/jsp/welcome.jsp</welcome-file> </welcome-file-list> this is the page got displaied: <%@taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%> <%@ taglib uri="http://www.springframework.org/tags" prefix="spring"%> <link type="text/css" rel="stylesheet" href="<spring:url value='resources/css/styles.css' />" /> <script type="text/javascript" src="<spring:url value='resources/js/jquery-1.10.2.min.js' />"></script> <script type="text/javascript" src="<spring:url value='resources/script.js'/>"</script> <ul id="button"> <c:forEach var="category" items="${categoryList}"> <li><a href="#">${category.categoryName}</a></li> </c:forEach> </ul> the folder structure in eclipse: myweb | | | |----Java Resources | | | | | |-----src/main/resources | | | | | | | | |------js | | | | | | | |-----jquery-1.10.2.min.js | | | | | | | | | | | |-----script.js | | | | | | | | |-----css | | | | | | | |-----style.css | | | | | | | | any tips would be appreciated!! thanks in advanced!

    Read the article

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
    I don't know about you but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's string dictionary, but it only works with strings. There's Hashset<T> but it uses the actual values as keys. In most key value pair situations for me string is key value to work off. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via it's default serialization, and while the DataContractSerializer does support IDictionary serialization it produces some pretty atrocious XML. What doesn't work? First off Dictionary serialization with the Xml Serializer doesn't work so the following fails: [TestMethod] public void DictionaryXmlSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXml(bag)); } public string ToXml(object obj) { if (obj == null) return null; StringWriter sw = new StringWriter(); XmlSerializer ser = new XmlSerializer(obj.GetType()); ser.Serialize(sw, obj); return sw.ToString(); } The error you get with this is: System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary. Got it! BTW, the same is true with binary serialization. 
Running the same code above against the DataContractSerializer does work: [TestMethod] public void DictionaryDataContextSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXmlDcs(bag)); } public string ToXmlDcs(object value, bool throwExceptions = false) { var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null); MemoryStream ms = new MemoryStream(); ser.WriteObject(ms, value); return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length); } This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here): <ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <KeyValueOfstringanyType> <Key>key</Key> <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key2</Key> <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key3</Key> <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key4</Key> <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key5</Key> <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key7</Key> <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value> </KeyValueOfstringanyType> </ArrayOfKeyValueOfstringanyType> Ouch! That seriously hurts the eye! :-) Worse though it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal. Why should I care? As a little background, in one of my applications I have a need for a flexible property bag that is used on a free form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: When loading write the data to the collection, when the data is saved serialize the data into an XML string and store it into the database. When reading the data pick up the XML and if the collection on the entity is accessed automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post). While the DataContext Serializer would work, it's verbosity is problematic both for size of the generated XML strings and the fact that users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly. Custom XMLSerialization with a PropertyBag Class So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides for a serializable Dictionary. 
It's basically a custom Dictionary<TType,TValue> implementation with the keys always set as string keys. The result are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TType> and PropertyBag classes provide these features: Subclassed from Dictionary<T,V> Implements IXmlSerializable with a cleanish XML format ToXml() and FromXml() methods to export and import to and from XML strings Static CreateFromXml() method to create an instance It's simple enough as it's merely a Dictionary<string,object> subclass but that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use: [TestMethod] public void PropertyBagTwoWayObjectSerializationTest() { var bag = new PropertyBag(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42,45,66 } ); bag.Add("Key8", null); bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 }); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag["key"] as string == "Value"); Assert.IsInstanceOfType( bag["Key3"], typeof(Guid)); Assert.IsNull(bag["Key8"]); //Assert.IsNull(bag["Key10"]); Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject)); } This uses the PropertyBag class which uses a PropertyBag<string,object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this: [TestMethod] public void PropertyBagTwoWayValueTypeSerializationTest() { var bag = new PropertyBag<decimal>(); bag.Add("key", 10M); bag.Add("Key1", 100.10M); bag.Add("Key2", 200.10M); bag.Add("Key3", 300.10M); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag.Get("Key1") == 100.10M); Assert.IsTrue(bag.Get("Key3") == 300.10M); } and produces typed results of type decimal. The types can be either value or reference types the combination of which actually proved to be a little more tricky than anticipated due to null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string. 
The XML produced for the first example looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value>Value</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="___System.Guid">
      <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid>
    </value>
  </item>
  <item>
    <key>Key4</key>
    <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value>
  </item>
  <item>
    <key>Key5</key>
    <value type="boolean">true</value>
  </item>
  <item>
    <key>Key7</key>
    <value type="base64Binary">Ki1C</value>
  </item>
  <item>
    <key>Key8</key>
    <value type="nil" />
  </item>
  <item>
    <key>Key9</key>
    <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject">
      <ComplexObject>
        <Name>Rick</Name>
        <Entered>2011-09-26T17:45:58.5789578-10:00</Entered>
        <Count>10</Count>
      </ComplexObject>
    </value>
  </item>
</properties>

The format is a bit cleaner than the DataContractSerializer's. Each item is serialized into <key> and <value> pairs. If the value is a string, no type information is written; since string tends to be the most common type, this saves space and serialization processing. All other types get a type attribute. Simple types are mapped to XML types, so things like decimal, datetime, boolean and base64Binary are encoded using their XML type names. All other types are embedded with a hokey format that describes the .NET type, preceded by three underscores, and are then encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS output and it works - as long as you're serializing back and forth between .NET clients at least.

The XML generated from the second example that uses PropertyBag<decimal> looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value type="decimal">10</value>
  </item>
  <item>
    <key>Key1</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">200.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="decimal">300.10</value>
  </item>
</properties>

How does it work

As I mentioned, there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom XML serialization plus a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it has proven very handy for a number of my projects where dynamic data storage is required. Here's the code:

/// <summary>
/// Creates a serializable string/object dictionary that is XML serializable.
/// Encodes keys as element names and values as simple values with a type
/// attribute that contains an XML type name. Complex types encode the type
/// name with type='___namespace.classname' format followed by a standard xml
/// serialized format. The latter serialization can be slow so it's not recommended
/// to pass complex types if performance is critical.
/// </summary>
[XmlRoot("properties")]
public class PropertyBag : PropertyBag<object>
{
    /// <summary>
    /// Creates an instance of a propertybag from an Xml string
    /// </summary>
    /// <param name="xml">Xml string to deserialize from</param>
    /// <returns></returns>
    public static PropertyBag CreateFromXml(string xml)
    {
        var bag = new PropertyBag();
        bag.FromXml(xml);
        return bag;
    }
}

/// <summary>
/// Creates a serializable string-keyed dictionary for generic value types that is XML serializable.
///
/// Encodes keys as element names and values as simple values with a type
/// attribute that contains an XML type name.
/// Complex types encode the type
/// name with type='___namespace.classname' format followed by a standard xml
/// serialized format. The latter serialization can be slow so it's not recommended
/// to pass complex types if performance is critical.
/// </summary>
/// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam>
[XmlRoot("properties")]
public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable
{
    /// <summary>
    /// Not implemented - this means no schema information is passed
    /// so this won't work with ASMX/WCF services.
    /// </summary>
    /// <returns></returns>
    public System.Xml.Schema.XmlSchema GetSchema()
    {
        return null;
    }

    /// <summary>
    /// Serializes the dictionary to XML. Keys are
    /// serialized to element names and values as
    /// element values. An xml type attribute is embedded
    /// for each serialized element - a .NET type
    /// element is embedded for each complex type and
    /// prefixed with three underscores.
    /// </summary>
    /// <param name="writer"></param>
    public void WriteXml(System.Xml.XmlWriter writer)
    {
        foreach (string key in this.Keys)
        {
            TValue value = this[key];

            Type type = null;
            if (value != null)
                type = value.GetType();

            writer.WriteStartElement("item");

            writer.WriteStartElement("key");
            writer.WriteString(key as string);
            writer.WriteEndElement();

            writer.WriteStartElement("value");
            string xmlType = XmlUtils.MapTypeToXmlType(type);
            bool isCustom = false;

            // Type information attribute if not string
            if (value == null)
            {
                writer.WriteAttributeString("type", "nil");
            }
            else if (!string.IsNullOrEmpty(xmlType))
            {
                if (xmlType != "string")
                {
                    writer.WriteStartAttribute("type");
                    writer.WriteString(xmlType);
                    writer.WriteEndAttribute();
                }
            }
            else
            {
                isCustom = true;
                xmlType = "___" + value.GetType().FullName;
                writer.WriteStartAttribute("type");
                writer.WriteString(xmlType);
                writer.WriteEndAttribute();
            }

            // Actual value serialization
            if (!isCustom)
            {
                if (value != null)
                    writer.WriteValue(value);
            }
            else
            {
                // complex types go through the XmlSerializer
                XmlSerializer ser = new XmlSerializer(value.GetType());
                ser.Serialize(writer, value);
            }

            writer.WriteEndElement(); // value
            writer.WriteEndElement(); // item
        }
    }

    /// <summary>
    /// Reads the custom serialized format
    /// </summary>
    /// <param name="reader"></param>
    public void ReadXml(System.Xml.XmlReader reader)
    {
        this.Clear();

        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "key")
            {
                string xmlType = null;
                string name = reader.ReadElementContentAsString();

                // item element
                reader.ReadToNextSibling("value");

                if (reader.MoveToNextAttribute())
                    xmlType = reader.Value;

                reader.MoveToContent();

                TValue value;
                if (xmlType == "nil")
                    value = default(TValue);  // null
                else if (string.IsNullOrEmpty(xmlType))
                {
                    // value is a string or object and we can assign TValue to value
                    string strval = reader.ReadElementContentAsString();
                    value = (TValue)Convert.ChangeType(strval, typeof(TValue));
                }
                else if (xmlType.StartsWith("___"))
                {
                    while (reader.Read() && reader.NodeType != XmlNodeType.Element)
                    { }

                    Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3));
                    //value = reader.ReadElementContentAs(type,null);
                    XmlSerializer ser = new XmlSerializer(type);
                    value = (TValue)ser.Deserialize(reader);
                }
                else
                    value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null);

                this.Add(name, value);
            }
        }
    }

    /// <summary>
    /// Serializes this dictionary to an XML string
    /// </summary>
    /// <returns>XML String or Null if it fails</returns>
    public string ToXml()
    {
        string xml = null;
        SerializationUtils.SerializeObject(this, out xml);
        return xml;
    }

    /// <summary>
    /// Deserializes from an XML string
    /// </summary>
    /// <param name="xml"></param>
    /// <returns>true or false</returns>
    public bool FromXml(string xml)
    {
        this.Clear();

        // if xml string is empty we return an empty dictionary
        if (string.IsNullOrEmpty(xml))
            return true;

        var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>;
        if (result != null)
        {
            foreach (var item in result)
            {
                this.Add(item.Key, item.Value);
            }
        }
        else
            // null is a failure
            return false;

        return true;
    }

    /// <summary>
    /// Creates an instance of a propertybag from an Xml string
    /// </summary>
    /// <param name="xml"></param>
    /// <returns></returns>
    public static PropertyBag<TValue> CreateFromXml(string xml)
    {
        var bag = new PropertyBag<TValue>();
        bag.FromXml(xml);
        return bag;
    }
}

The code uses a couple of small helper classes, SerializationUtils and XmlUtils, for mapping XML types to and from .NET - both come from the Westwind.Utilities project (which is the same project where PropertyBag lives) in the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old-school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods, .ToXml() and .FromXml(), that allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to persist to and from the entity's XML property during .Load() and .Save() operations. It's sweet to be able to have a string-keyed dictionary and then turn around and, with one line of code, persist the whole thing to XML and back.

Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it.

Resources

You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you.

PropertyBag Source Code
SerializationUtils and XmlUtils
Westwind.Utilities Assembly on NuGet (add from Visual Studio)

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  CSharp
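To round out the picture, here's roughly what the entity integration described above can look like: an XML string column that maps straight to a database field, with the PropertyBag materialized lazily on first access. This is only a sketch under my own assumptions - the Customer entity, the PropertiesXml column and the SyncProperties() hook are names made up for illustration and aren't part of the Westwind code:

public class Customer
{
    // XML string column that is persisted to the database as-is
    public string PropertiesXml { get; set; }

    private PropertyBag _properties;

    // lazily deserializes the XML field into the PropertyBag on first access
    public PropertyBag Properties
    {
        get
        {
            if (_properties == null)
                _properties = string.IsNullOrEmpty(PropertiesXml)
                    ? new PropertyBag()
                    : PropertyBag.CreateFromXml(PropertiesXml);
            return _properties;
        }
    }

    // called just before saving to push the bag back into the XML column
    public void SyncProperties()
    {
        if (_properties != null)
            PropertiesXml = _properties.ToXml();
    }
}

With a pattern like this, Save() only needs to call SyncProperties() before writing the record, and loading doesn't have to do anything at all until the Properties collection is actually touched.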

