Search Results

Search found 9254 results on 371 pages for 'approach'.


  • Render Ruby object to interactive html

    - by AvImd
    I am developing a tool that discovers network services enabled on a host and writes a short summary of them like this: init,1 +-- login,1560 -- +-- bash,1629 +-- nc,12137 -lup 50505 { :net = [ [0] "*:50505 IPv4 UDP " ], :fds = [ [0] "/root (cwd)", [1] "/", [2] "/bin/nc.traditional", [3] "/xochikit/ld_poison.so (stat: No such file or directory)", [4] "/dev/tty2", [5] "*:50505" ] } It proved to be very nicely formatted and useful for quick discovery, thanks to the colors provided by the awesome_print gem. However, its output is just text. One issue is that if I want to share it, I lose the colors. I'd also like to fold and unfold parts of objects, quickly jump to specific processes, add comments, and so on. Thus I want something web-based. What is the best approach to implement features like these? I haven't worked with web interfaces before and I don't have much experience with Ruby.
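
    One way to get the folding and sharing the question asks about is to render the object tree as HTML <details>/<summary> elements: the browser provides fold/unfold natively, and colors survive as CSS. Below is a minimal sketch of that idea, in TypeScript rather than Ruby purely for illustration; renderNode and escapeHtml are invented names, not part of any gem.

    ```typescript
    // Hypothetical sketch: render a nested object as foldable HTML.
    type Json = string | number | Json[] | { [key: string]: Json };

    function escapeHtml(s: string): string {
      return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
    }

    function renderNode(label: string, value: Json): string {
      if (typeof value === "string" || typeof value === "number") {
        return `<li><b>${escapeHtml(label)}</b>: ${escapeHtml(String(value))}</li>`;
      }
      const children = Array.isArray(value)
        ? value.map((v, i) => renderNode(`[${i}]`, v))
        : Object.entries(value).map(([k, v]) => renderNode(k, v));
      // <details open> starts unfolded; clicking the summary folds the subtree.
      return `<li><details open><summary>${escapeHtml(label)}</summary>` +
             `<ul>${children.join("")}</ul></details></li>`;
    }

    // Usage: dump the tool's data as an object and write this string to an .html file.
    const report: Json = { net: ["*:50505 IPv4 UDP"], fds: ["/root (cwd)", "/bin/nc.traditional"] };
    console.log(`<ul>${renderNode("nc,12137", report)}</ul>`);
    ```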

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There's more to say than fits into a commentary. Mocks and TDD I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a Wikipedia article.
    They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution. Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits” too. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here's the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process: find all the factors, translate the factors found, and compile the roman representation. Translation is just a look-up.
    Finding, though, needs some calculation: find the highest remaining factor fitting into the value; remember it and subtract it from the value; repeat with the remaining value and the remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It's red because there's no implementation even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly not just one test becomes green now, but all of them. Great! You might say: then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished. At least functionality-wise. Now I've to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small change in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%. Reflexion: Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution – the leaves of the functional decomposition tree.
    Where things became fuzzy because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: encapsulation of the implementation details was not compromised – private methods could naturally stay private; I did not need to make them internal or public just to be able to test them – and I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for different reasons. PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
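
    The excerpt above lost the article's code screenshots, so here is a hedged reconstruction of the described algorithm (find the factors, translate each one, compile the representation), in TypeScript rather than the article's C#. This is a sketch of the approach as described, not the author's actual code.

    ```typescript
    // IV, IX etc. are treated as "digits" of their own, per the article.
    const FACTORS: Array<[number, string]> = [
      [1000, "M"], [900, "CM"], [500, "D"], [400, "CD"],
      [100, "C"], [90, "XC"], [50, "L"], [40, "XL"],
      [10, "X"], [9, "IX"], [5, "V"], [4, "IV"], [1, "I"],
    ];

    function findFactors(arabic: number): number[] {
      const factors: number[] = [];
      for (const [value] of FACTORS) {
        while (arabic >= value) { // highest remaining factor fitting the value
          factors.push(value);    // remember it...
          arabic -= value;        // ...and subtract it from the value
        }
      }
      return factors;
    }

    function translate(factor: number): string {
      return FACTORS.find(([value]) => value === factor)![1]; // pure look-up
    }

    function toRoman(arabic: number): string {
      return findFactors(arabic).map(translate).join(""); // compile
    }

    console.log(toRoman(1954)); // "MCMLIV" = M + CM + L + IV
    ```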

    Read the article

  • .NET - refactoring code

    - by w0051977
    I have inherited and now further develop a large application consisting of an ASP.NET application, a VB6 application and a VB.NET application. The software was poorly written. I am trying to refactor the code as I go along. The changes I am making are not live (they are contained in a folder on my development machine). This is proving to be time consuming, and I am doing it alongside other work which is the priority. My question is: is this a practical approach, or is there a better methodology for refactoring code? I don't have any experience with version control (source control) software and I am wondering if this is what I am missing. I am a sole developer.

    Read the article

  • How to teach computer science?

    - by proferikson
    I am just starting to teach computer science. It's just the basic level. I'm finding that I sometimes don't know how to approach topics in a way that lets students easily understand. I've found that the best thing is to use analogies. A terrible one, that worked well, was presenting recursion by using the process of eating a burger. (You eat one bite, then you eat the rest.) Interesting real world examples do wonders here. I'd like to know the techniques you have used for teaching, or that your own teachers used, that proved particularly effective. I'd also like to know your best analogies and examples for topics in CS101, especially for the harder-to-grasp topics. I'm teaching the class in Java (as required).
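
    To make the burger analogy concrete in code, here is a tiny classroom sketch (in TypeScript here, though the poster teaches in Java; eatBurger is an invented example):

    ```typescript
    // Eating a burger is "take one bite, then eat the rest": the base case
    // is an empty burger, the recursive case is a smaller burger.
    function eatBurger(bitesLeft: number): void {
      if (bitesLeft === 0) return; // base case: nothing left to eat
      console.log("take a bite");  // do one step of work
      eatBurger(bitesLeft - 1);    // recurse on the smaller problem
    }

    eatBurger(3); // prints "take a bite" three times
    ```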

    Read the article

  • Should I add parameters to instance methods that use those instance fields as parameters?

    - by john smith optional
    I have an instance method that uses instance fields in its work. I can leave the method without those parameters, as the fields are available to me, or I can add them to the parameter list, thus making my method more "generic" and not reliant on the class. On the other hand, the parameter list gets longer. Which approach is preferable and why? Edit: at the moment I don't know if my method will be public or private. Edit 2: clarification: both the method and the fields are instance level.
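
    The trade-off is easier to see with the two variants side by side. A hedged sketch (Scaler and ratio are invented names; the question itself is language-agnostic):

    ```typescript
    class Scaler {
      private ratio = 2;

      // Variant 1: reads the instance field directly. Call sites stay short,
      // but the method only works against this object's state.
      scale(value: number): number {
        return value * this.ratio;
      }

      // Variant 2: takes the ratio as a parameter. Reusable and easy to test
      // in isolation, but callers must pass state the object already holds.
      scaleWith(value: number, ratio: number): number {
        return value * ratio;
      }
    }
    ```

    A common middle ground is a thin public method that reads the fields and delegates to the parameterized variant, so the class stays convenient while the logic stays testable.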

    Read the article

  • Java regex patterns - compile time constants or instance members?

    - by KepaniHaole
    Currently, I have a couple of singleton objects where I'm doing matching on regular expressions, and my Patterns are defined like so: class Foobar { private final Pattern firstPattern = Pattern.compile("some regex"); private final Pattern secondPattern = Pattern.compile("some other regex"); // more Patterns, etc. private Foobar() {} public static Foobar create() { /* singleton stuff */ } } But I was told by someone the other day that this is bad style, and Patterns should always be defined at the class level, and look something like this instead: class Foobar { private static final Pattern FIRST_PATTERN = Pattern.compile("some regex"); private static final Pattern SECOND_PATTERN = Pattern.compile("some other regex"); // more Patterns, etc. private Foobar() {} public static Foobar create() { /* singleton stuff */ } } The lifetime of this particular object isn't that long, and my main reason for using the first approach is that it doesn't make sense to me to hold on to the Patterns once the object gets GC'd. Any suggestions / thoughts?

    Read the article

  • "44 Tips" in PHP Magazin and Other NetBeans IDE Screencasts

    - by Geertjan
    My recent YouTube series "44 Tips for Front End Web Devs" (part 1, part 2) has been picked up by PHP Magazin: http://phpmagazin.de/news/Frontend-Entwicklung-mit-NetBeans-IDE-168339 Great. I'm working on more screencasts like that, from different angles. For example, one will methodically explain each and every window in NetBeans IDE; another will step through the creation of an application from conception to deployment; while another will focus on the NetBeans IDE extension points and how easily they can be used to add new features to NetBeans IDE. The screencast approach has, I think, a lot of advantages. They take less time to make and they seem to be more effective, in several ways, than tutorials. Hearing someone talk through a scenario seems to also put things in a clearer perspective than when you have everything written out in a document, where small details get lost and diversions are more difficult to make. Anyway, onwards to more screencasts. Any special requests?

    Read the article

  • Narrowing down my large keyword list for new PPC campaign

    - by gijoemike
    If I have a list of 100 keywords that are candidates for a PPC campaign (my list is actually 1000+), what is the best approach to narrowing this down to the top 5-10 keywords I should start with? I'm also wondering if my top chosen keywords for the PPC campaign should be my main keywords when optimizing the site for organic traffic. I also have another question on this site asking: How does one estimate where a competitor is getting most of their traffic from? Thanks. The website isn't created yet, but will be up in January.

    Read the article

  • Vector Graphics in DirectX

    - by Doug
    I'm curious as to people's thoughts on the best way to use vector graphics in a DirectX game instead of rasterized textures (think Super Meat Boy). I want to remain resolution independent and don't want to downscale/upscale rasterized graphics. The idea would be for all assets to be vector graphics (again, think Super Meat Boy). I've looked at Valve's paper "Improved Alpha-Tested Magnification for Vector Textures and Special Effects" and also looked at using shaders: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch25.html. Wondering if anyone has done something similar or has an alternate approach. Cheers

    Read the article

  • Benchmark Against 160 Identity and Access Programs Worldwide

    - by Naresh Persaud
    Aberdeen documented the results of taking a "platform approach" to Identity and Access Management in a recent study - you can read the complete report here. Aberdeen has created an assessment tool that allows organizations to take a similar survey and compare their performance to companies surveyed in the original report. The assessment takes 5 minutes to complete and provides a complete printable report with a statistical comparison for each performance indicator. In addition, the assessment report provides guidance on improvements that organizations can take to achieve better results based on the benchmark. Take the assessment by clicking here.  You can also attend one of the physical events and discuss the results of the survey with Derek Brink the author. In the events, Derek discusses how organizations take advantage of the report. Register here. 

    Read the article

  • How to optimize collision detection

    - by Niklas
    I am developing a 2D Java game with LibGDX. This is roughly what it looks like (simplified): the big black circle is the player, which you can move by tilting the smartphone. The red circles and blue rectangles are enemies, which move from the right of the screen to the left. The player has to avoid crashing into them. Right now I am checking every enemy against the player in the game loop to see whether they collide or not. This seems kind of inefficient to me, but I don't know how to improve it. I have tried the quadtree approach, but it did not really work: the player could easily glitch through enemies and the collision was not detected. Unfortunately, I have since deleted the quadtree implementation. I used this tutorial/blog as my quadtree implementation (http://gamedevelopment.tutsplus.com/tutorials/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space--gamedev-374).
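
    One common alternative to a quadtree for this kind of scene is a uniform-grid broad phase: bucket the enemies by cell once per frame, then test the player only against enemies in nearby cells. A hedged TypeScript sketch (all names invented; the math carries over to LibGDX/Java directly):

    ```typescript
    interface Circle { x: number; y: number; r: number; }

    const CELL = 64; // cell size in world units; tune to typical enemy size

    function buildGrid(enemies: Circle[]): Map<string, Circle[]> {
      const grid = new Map<string, Circle[]>();
      for (const e of enemies) {
        const key = `${Math.floor(e.x / CELL)},${Math.floor(e.y / CELL)}`;
        let bucket = grid.get(key);
        if (!bucket) { bucket = []; grid.set(key, bucket); }
        bucket.push(e);
      }
      return grid;
    }

    function collides(a: Circle, b: Circle): boolean {
      const dx = a.x - b.x, dy = a.y - b.y;
      const rr = a.r + b.r;
      return dx * dx + dy * dy <= rr * rr; // squared distance: no sqrt needed
    }

    function hitsAnything(player: Circle, grid: Map<string, Circle[]>): boolean {
      const cx = Math.floor(player.x / CELL), cy = Math.floor(player.y / CELL);
      for (let gx = cx - 1; gx <= cx + 1; gx++)   // 3x3 cell neighborhood
        for (let gy = cy - 1; gy <= cy + 1; gy++)
          for (const e of grid.get(`${gx},${gy}`) ?? [])
            if (collides(player, e)) return true;
      return false;
    }
    ```

    The glitching-through described above is usually tunneling rather than a quadtree bug: if anything moves farther than its own size in one frame, no static spatial structure will catch the miss. Testing in sub-steps (or sweeping the moved circle) fixes that independently of the broad phase.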

    Read the article

  • Approaching Java/JVM internals

    - by spinning_plate
    I've programmed in Java for about 8 years and I know the language quite well as a developer, but my goal is to deepen my knowledge of the internals. I've taken undergraduate courses in PL design, but they were very broad academic overviews (in Scheme, IIRC). Can someone suggest a route to start delving into the details? Specifically, are there particular topics (say, garbage collection) that might be more approachable or be a good starting point? Is there a decent high-level book on the internals of the JVM and the design of the Java programming language? My current approach is going to be to start with the JVM spec and research as needed.

    Read the article

  • RightNow CX @ OpenWorld: What to Experience

    - by Tony Berk
    We want to welcome our RightNow CX customers to Oracle OpenWorld next week. Get ready for a great week and a whole new experience! For a high level overview of what is going on during the week, please review these previous posts: Is There a Cloud Over OpenWorld? and What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12. Also, don't forget you can add on the Customer Experience Summit @ OpenWorld to make your week even more complete and get involved with the Experience Revolution! Below is a highlight of only some of the RightNow related sessions at OpenWorld. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. No better way to start off than hearing where Oracle RightNow is going! Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from David Vap and his team of Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Interested in the Cloud and want to know why some leading CIOs are moving to the cloud? You can hear first hand from CIOs from Emerson, Intuit and Overstock.com: CIOs and Governance in the Cloud (CON9767) - Oct 3, 11:45 AM.   And of course there are a number of sessions that drill down into more specific areas. Here are just a few: Deliver Outstanding Customer Experiences: Oracle RightNow Dynamic Agent Desktop Cloud Service (CON9771) - Oct 1, 4:45 PM. This session covers how companies have delivered exceptional customer experiences and how the Oracle RightNow Dynamic Agent Desktop Cloud Service roadmap will evolve in the future. The Oracle RightNow Contact Center Experience suite includes incident management, knowledge, guided processes, and other service capabilities to unify the customer experience across channels. Come learn about the powerful tools that enable even your junior agents to consistently provide outstanding service across all customer interaction channels. Self-Service in the Age of Data Intimacy (CON11516) - Oct 1, 3:15. Even though businesses are generating more and more data around their relationships and interactions with customers, very little of the information a business generates ends up available to the contact center and even less is made available to the online service experience. The generic one-size-fits-all approach that typifies most online service experiences ultimately fails to address all user needs, and that failure ultimately leads to the continued use of high-cost agent-assisted channels for low-value interactions. This session introduces Oracle RightNow Web Experience’s Virtual Assistant and discusses how you can deliver rich, engaging, highly personalized experiences with the quality of agent-assisted service at a much lower cost. Improve Chat Experiences: Best Practices for Chat Pilots and Deployments (CON11517) - Oct 1, 4:45 PM. 
Today’s organizations are challenged to grow revenue and retain customers with fewer resources, and many have turned to chat as an approach to improving the customer experience, increasing sales conversions, and reducing costs at the same time. From setting goals and metrics and training staff to customizing and tuning the solution, this session provides best practices and lessons learned from a broad set of implementations to help you get the most out of your chat solution. Differentiated Experience with Web Service (CON9770) - Oct 2, 1:15 PM. A reputation for excellent customer service can differentiate your brand and drive revenue. In this session, learn how to develop that reputation by transforming your online self-service into a highly interactive, branded customer experience. See live examples of how Oracle RightNow Web Experience has helped customers deliver on their Web service strategies. Unifying the Agent’s Engagement Console (CON11518) - Oct 2, 1:15 PM. Does your customer experience suffer because your agents are toggling between multiple tools? Do your agent productivity and morale suffer as well? Come to this session to learn how Oracle RightNow CX Cloud Service seamlessly unifies these disparate systems into a single engagement console. Regardless of channel, powerful adaptive tools consistently guide agents across contextually aware personalized workflows. Great agent experiences drive great customer experiences. Oracle RightNow CX Cloud Service and the Oracle Customer Experience Portfolio (CON9775) - Oct 3, 10:15 AM. This session covers how Oracle’s integrated suite of customer experience (CX) products fits with the Oracle CX portfolio of products (Oracle Fusion Customer Relationship Management; the Oracle ATG, Oracle Endeca, and Oracle Knowledge product families; and Oracle Business Intelligence) to increase revenues, strengthen customer relationships, and reduce costs across the entire end-to-end customer lifecycle for companies that sell to consumers and those that sell to businesses. Greater Insights from Customer Engagements (CON9773) Oct 4, 12:45 PM. In this session, hear how to leverage service interaction insights, customer feedback, and segmented service engagements to improve the customer experience. Discover how customers, such as J&P Cycles, learn and take action based on business insights gained through their customer engagements. Again, these are just some of the sessions, so check out the Content Catalog for details on Knowledge Management, Customization, Integration and more in the Oracle Develop stream for Customer Experience. Be sure to visit the Oracle DEMOgrounds in the Moscone West Exhibit Hall. If this is your first OpenWorld, welcome! If you are returning, hi again and enjoy!

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build, so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views. Using SQL Projects in SSMS A great many developers fail to make use of SQL projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project names. The numbers in the folder and file names ensure that the SQL scripts are concatenated together in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc. Concatenating SQL Scripts To concatenate the SQL scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL script files into a combined file. As the SQL scripts are in several folders, I simply use the TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write the output of the program into a file. Two angled brackets (>>) means append the output of the program to a file. So the command line DIR > filelist.txt would write the content of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql. If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag. Using SQLCmd to execute the concatenated file Now that the SQL scripts are all in one big file, we can execute the script against a database using SQLCmd using another batch file as shown below: SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log.
    So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run the SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition. Using SSIS to execute the concatenated file To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package, set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go. Tips and Tricks: Add a new-line at the end of every file. The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix is to ensure all your files have a new-line at the end. Remove all USE database statements. The SQLCmd call identifies which database the script should be run against, so you should remove all USE database commands from your scripts – otherwise you may get unintentional side effects! Do the CREATE DATABASE separately. If you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.
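
    For readers not on Windows batch files, the same concatenate-then-execute idea can be sketched in a few lines of TypeScript/Node. This is a hypothetical alternative, not the article's method, and the folder names merely mimic the article's numbering convention:

    ```typescript
    import { readdirSync, readFileSync, writeFileSync } from "fs";
    import { join } from "path";

    // Gather *.sql files in name order (100_..., 110_...) and append each
    // with a trailing newline, which avoids the "GO glued to the next
    // file's first comment" problem described above.
    function concatScripts(folders: string[], outFile: string): void {
      let combined = "";
      for (const folder of folders) {
        const files = readdirSync(folder).filter(f => f.endsWith(".sql")).sort();
        for (const file of files) {
          combined += readFileSync(join(folder, file), "utf8").replace(/\s*$/, "\n");
        }
      }
      writeFileSync(outFile, combined);
    }

    concatScripts(["100_Tables", "200_Views", "300_StoredProcs"], "SB_DDS.sql");
    // Then run e.g.: SQLCmd -d SB_DDS_DB -i SB_DDS.sql -o SB_DDS.log
    ```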

    Read the article

  • deciphering columnar transposition cipher

    - by Arfan M
    I am looking for an idea on how to decipher a columnar transposition cipher without knowing the key or the length of the key. When I take the ciphertext as input to my algorithm, I will guess the length of the key to be one of the factors of the length of the ciphertext. I will take the first factor: suppose the length was 20 letters, so I will take 2*10 (2 rows and 10 columns). Now I want to arrange the ciphertext in the columns and read it row-wise to see if any words form, and match them against a dictionary to see if the result is sensible. If it matches the dictionary, then it means it is in the correct order; otherwise I want to know how to make other combinations of the columns and read the string again row-wise. Please suggest another approach that is more efficient.
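
    A sketch of the arrangement step the poster describes, plus the piece being asked for (trying other column orders). TypeScript, with invented names; this assumes the ciphertext length is an exact multiple of the column count, as the factor-based guess implies:

    ```typescript
    // The ciphertext was written down the columns, so a plaintext candidate
    // is obtained by reading row by row in some guessed column order.
    function readRowwise(cipher: string, colOrder: number[]): string {
      const rows = Math.ceil(cipher.length / colOrder.length);
      let out = "";
      for (let r = 0; r < rows; r++)
        for (const col of colOrder) {
          const i = col * rows + r; // column-major index into the ciphertext
          if (i < cipher.length) out += cipher[i];
        }
      return out;
    }

    // Enumerate column orders for a guessed key length; score each candidate
    // plaintext and keep the best one.
    function* permutations<T>(items: T[]): Generator<T[]> {
      if (items.length <= 1) { yield items; return; }
      for (let i = 0; i < items.length; i++) {
        const rest = [...items.slice(0, i), ...items.slice(i + 1)];
        for (const p of permutations(rest)) yield [items[i], ...p];
      }
    }
    ```

    On efficiency: scoring candidates with letter-pair (bigram or trigram) frequencies is usually faster and more robust than whole-word dictionary matching, since it tolerates missing word boundaries and prunes bad column orders early.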

    Read the article

  • Is it possible to migrate a Struts/Spring-based application to GWT?

    - by Satish Pandey
    I am using the combination of Spring, Spring Security, Struts and iBatis in my application. Now I am looking to migrate the Struts UI to GWT; the new combination must be Spring, Spring Security, GWT and iBatis. I applied a layered approach to develop my application, and in the controller/UI layer I am using Struts. I want to replace Struts and use GWT in the controller/UI layer. Is it possible to use GWT without affecting the other layers (DAO/BL/SL)?

    Read the article

  • How should I invoke a physics engine?

    - by ymfoi
    I'm new to writing games. I'm planning to write a 2D battle game which may require a physics engine. Suppose I've written one: how can I combine it with the main routine of my game? Should I attach it directly to the graphics render routine or put it in an individual thread? I've spent much time looking for some common approach, but found nothing. So can you explain some basic ideas for me, a newbie? Thanks! P.S. There are many other problems I would have to deal with if I chose to start a separate thread for the physics engine – for example, the locking problem – though my intuition says I'd better separate the renderer and the physics engine.
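
    The answer most game loops converge on is neither "inside the render routine" nor "its own thread" but a fixed-timestep update driven from the main loop: no locking, and the simulation stays deterministic. A minimal sketch of this well-known pattern in TypeScript (stepPhysics and render are stand-ins for the poster's own code):

    ```typescript
    const DT = 1 / 60;   // fixed physics step, in seconds
    let accumulator = 0;
    let previous = performance.now();

    declare function stepPhysics(dt: number): void; // the physics engine
    declare function render(alpha: number): void;   // the drawing routine

    function frame(now: number): void {
      accumulator += Math.min((now - previous) / 1000, 0.25); // clamp long stalls
      previous = now;

      while (accumulator >= DT) { // catch up in fixed-size steps
        stepPhysics(DT);          // physics always sees the same dt
        accumulator -= DT;
      }
      render(accumulator / DT);   // interpolation factor for smooth drawing
      requestAnimationFrame(frame);
    }

    requestAnimationFrame(frame);
    ```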

    Read the article

  • With a small development team, how do you organize second-level support?

    - by Lenny222
    Say you have a team of 5 developers and your in-house customers demand reasonable support availability of, say, 5 days a week, 9am-6pm. I can imagine the following scenarios: the customers approach the same guy every time (downside: single point of failure if the guy is unavailable); each developer is assigned one week of support duty (downside: how do you distribute the work evenly in times of planned (vacation) and unplanned (sickness) unavailability?); each developer is assigned one day of support duty (downside: similar to above, but not as bad); a randomly picked developer handles the support request (downside: maybe not fair, see above). What is your experience?

    Read the article

  • Most efficient arc for developing cross-browser support?

    - by Chris Hasbrouck
    I'm curious to hear what approach people take to planning for cross-browser support when developing a website. There are generally two approaches I've seen developers take in their workflow: optimize for WebKit then apply hacks for IE7-9, or optimize for IE7-8 then apply newer features for IE9/WebKit. Basically starting at the front of technology and working toward the back, or starting at the back of technology and working toward the front. How do you do things? What advantages or disadvantages do you perceive in the different ways of doing things with respect to developing cross-browser support?

    Read the article

  • Email Content creation | Proper design

    - by Umesh Awasthi
    I am working on an e-commerce application where we need to send many emails to the customer, such as: a registration email, a forgotten-password email, an order-placed email. There are many other emails that can be sent. I already have an emailService in place which is responsible for sending email, and it needs an Email object. Everything is working fine, but I am stuck at one point and not sure how best this can be done: we need to create the content that is passed to the emailService, and I am not sure how to design this. For example, in customer registration I have a customerFacade which works between the controller and the service layer. I just want to delegate this email content creation work away from the facade layer and make it more flexible. Currently I am creating the registration email content inside customerFacade, and somehow I am not liking this way, since it means that for each email I need to create content in the respective facade. What is the best way to go, or is the current approach fine enough?
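
    One way to pull content creation out of the facade is a small template registry: each email kind is a template, and a factory turns a template plus model data into the Email object the existing emailService already accepts. A hedged TypeScript sketch with invented names:

    ```typescript
    interface Email { to: string; subject: string; body: string; }

    const templates: Record<string, (m: Record<string, string>) => Email> = {
      registration: m => ({
        to: m.email,
        subject: "Welcome!",
        body: `Hello ${m.name}, thanks for registering.`,
      }),
      orderPlaced: m => ({
        to: m.email,
        subject: `Order ${m.orderId} confirmed`,
        body: `Hi ${m.name}, your order ${m.orderId} was placed.`,
      }),
    };

    function buildEmail(kind: string, model: Record<string, string>): Email {
      const template = templates[kind];
      if (!template) throw new Error(`No template for "${kind}"`);
      return template(model);
    }

    // The facade shrinks to: emailService.send(buildEmail("registration", {...}))
    ```

    In a Java stack the same role is typically played by a template engine; the point is that each facade only names a template and supplies a model, instead of assembling content itself.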

    Read the article

  • Determine percentage of screen covered by an object without using frustum culling

    - by Meltac
    On the CPU side of a 3D first-person / ego-perspective game I need to check whether what the player currently sees on screen is the inside of a box object defined by world-space coordinates (the player might be outside of that box but on screen sees only/mostly the inside of the box, or vice versa, looks from within the box to the outside). The "casual" way of performing such a check would incorporate frustum culling, but such an approach would be hard to achieve with my given set of engine parameters, so I'd like to avoid it if there is a simpler way. What I actually have at the point where I would like to do the check (high-level script on the CPU, not the GPU side): camera world position, camera direction, camera FOV, and two box corner world coordinates (left-bottom-front, right-top-back). What I do not have right away: a view frustum definition (near/far plane, or say the 6 planes defining the frustum) and any specific pixel information (uv, view-space position, depth or the like). What I would like to calculate: the percentage of the screen "covered" by the box. Any hints on how to perform such a calculation?
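
    With only the camera position, basis and FOV available, one rough option is to project the eight box corners into normalized screen space, take their bounding rectangle, clamp it to the screen and report its area. This over-estimates coverage (bounding box, no occlusion) but is often enough for an "am I mostly seeing the inside?" heuristic. A heavily hedged TypeScript sketch; the vector helpers are invented, and the right/up vectors are assumed derivable from the camera direction:

    ```typescript
    type Vec3 = [number, number, number];

    const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    function coverage(cam: Vec3, forward: Vec3, right: Vec3, up: Vec3,
                      fovY: number, aspect: number, lo: Vec3, hi: Vec3): number {
      const tanY = Math.tan(fovY / 2), tanX = tanY * aspect;
      let minX = 1, maxX = -1, minY = 1, maxY = -1, anyInFront = false;
      for (const x of [lo[0], hi[0]]) for (const y of [lo[1], hi[1]])
        for (const z of [lo[2], hi[2]]) {
          const v = sub([x, y, z], cam);
          const depth = dot(v, forward);
          if (depth <= 0) continue; // corner behind the camera: skip (crude)
          anyInFront = true;
          const sx = dot(v, right) / (depth * tanX); // normalized [-1,1] coords
          const sy = dot(v, up) / (depth * tanY);
          minX = Math.min(minX, sx); maxX = Math.max(maxX, sx);
          minY = Math.min(minY, sy); maxY = Math.max(maxY, sy);
        }
      if (!anyInFront) return 0;
      const w = Math.max(0, Math.min(maxX, 1) - Math.max(minX, -1));
      const h = Math.max(0, Math.min(maxY, 1) - Math.max(minY, -1));
      return (w * h) / 4; // fraction of the [-1,1] x [-1,1] screen square
    }
    ```

    When the camera is inside the box, the surviving corners spread across the screen and the clamped rectangle approaches the full screen, which matches the intent; corners straddling the near plane are where this approximation is weakest.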

    Read the article

  • Possible to set equal height for divs in pairs but only if browser width 960 or wider?

    - by CreateSean
    I'm working on a responsive site where I've found that I've got pairs of divs whose heights I would like to be equal, but only if the browser width is 960px or greater. Any smaller than that and the divs stack, so different heights do not make a difference. DIV 1 | DIV 2 DIV 3 | DIV 4 DIV 5 | DIV 6 DIV 7 | DIV 8 Based on the above setup, div 1 and div 2 need to be equal height, as do div 3 and div 4, but the pairs do not need to be equal to each other; i.e. the pairs can have different heights, but the divs within each pair must be equal. Is this possible, and if so what is the best approach to take? My JavaScript/jQuery is rather elementary. I'm sure I could do equal heights alone, but with the pair sets I'm not sure, and adding the requirement that this only happens if the browser is 960px or wider, I'm lost.
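
    This is doable without plugins. A sketch in plain TypeScript/DOM (jQuery would look almost identical): measure each pair, apply the taller height to both, and only when a 960px media query matches; ".col" is an assumed class on the eight divs in document order.

    ```typescript
    function equalizePairs(): void {
      const cols = Array.from(document.querySelectorAll<HTMLElement>(".col"));
      const wide = window.matchMedia("(min-width: 960px)").matches;
      for (let i = 0; i + 1 < cols.length; i += 2) {
        const a = cols[i], b = cols[i + 1];
        a.style.height = b.style.height = ""; // reset so natural heights measure
        if (wide) {
          const h = Math.max(a.offsetHeight, b.offsetHeight);
          a.style.height = b.style.height = `${h}px`; // each pair independently
        }
      }
    }

    window.addEventListener("resize", equalizePairs);
    window.addEventListener("DOMContentLoaded", equalizePairs);
    ```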

    Read the article

  • How to find Sub-trees in non-binary tree

    - by kenny
    I have a non-binary tree. I want to find all "sub-trees" that are connected to the root. A sub-tree here is a linked group of tree nodes; every group is colored in its own color. What would be the best approach? Run recursion down and up for every node? The data structure of every tree node is a list of children and a list of parents (the children and parents are themselves tree nodes). Clarification: a group is defined by a kind of "closure" between nodes, where the root itself is not part of the closure. As you can see from the graph, you can't travel from the pink nodes to the other nodes (you CANNOT use the root). From the brown node you can travel to its child, so this forms another group. Finally, you can travel from any cyan node to the other cyan nodes, so they form another group.
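
    Since travel may not pass through the root, the groups are exactly the connected components of the graph with the root removed, where parent and child links both count as edges. A TypeScript sketch with invented names (TreeNode mirrors the children/parents lists described above):

    ```typescript
    interface TreeNode { id: number; children: TreeNode[]; parents: TreeNode[]; }

    function findGroups(root: TreeNode, all: TreeNode[]): TreeNode[][] {
      const seen = new Set<number>([root.id]); // the root cannot be traveled through
      const groups: TreeNode[][] = [];
      for (const start of all) {
        if (seen.has(start.id)) continue;
        const group: TreeNode[] = [];
        const stack = [start]; // iterative DFS over one component
        seen.add(start.id);
        while (stack.length > 0) {
          const node = stack.pop()!;
          group.push(node);
          for (const next of [...node.children, ...node.parents]) {
            if (!seen.has(next.id)) { seen.add(next.id); stack.push(next); }
          }
        }
        groups.push(group);
      }
      return groups;
    }
    ```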

    Read the article

  • Object Oriented programming on 8-bit MCU Case Study

    - by Calvin Grier
    I see that there are a lot of questions related to OO programming here. I'm actually trying to find a specific resource related to embedded OO approaches for an 8-bit MCU. Several years back (maybe 6) I was looking for material related to object-oriented programming for resource-constrained 8051 microprocessors. I found an article/website with a case history of a design group that used a very small RAM part and implemented many object-based constructs during their C design and development. I believe it was an 8051. The project was a success and managed to stay inside the very small ROM/RAM they had available. I'm attempting to find it again, but Google can't locate it. The article was well written, and recommended a "mixed" approach using C methods for inheritance and encapsulation, if I recall correctly. Can anyone help me locate this article?

    Read the article

  • Best Practice - XML To Excel

    - by MemLeak
    I have to read a big XML file with a lot of information, then extract the needed information (~20 points/columns and ~80 relevant data rows, some of them with sub-datasets) and write it out to an Excel file. My question is how to handle the extraction of the unused data: should I copy the whole file, delete the unused parts and then write it to Excel, or is it a good approach to create objects for each column? Should I write the whole XML to Excel and start deleting rows in Excel? What would be a performant and acceptable solution?
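
    Rather than copying the whole file and deleting, it is usually simpler to extract only the wanted fields and write rows directly, never materializing the unused parts. A hedged TypeScript sketch using the browser/Deno DOMParser and CSV output (which Excel opens); the tag names are invented, and a real Excel library would replace toCsv:

    ```typescript
    const WANTED = ["id", "name", "price"]; // the ~20 columns, by element name

    function extractRows(xml: string): string[][] {
      const doc = new DOMParser().parseFromString(xml, "application/xml");
      const rows: string[][] = [];
      for (const record of Array.from(doc.getElementsByTagName("record"))) {
        rows.push(WANTED.map(tag =>
          record.getElementsByTagName(tag)[0]?.textContent ?? ""));
      }
      return rows; // unused elements are simply never copied anywhere
    }

    function toCsv(rows: string[][]): string {
      return [WANTED, ...rows]
        .map(r => r.map(v => `"${v.replace(/"/g, '""')}"`).join(","))
        .join("\n");
    }
    ```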

    Read the article
