Search Results

Search found 7513 results on 301 pages for 'actual'.


  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the data layer for a website at work, and I am very interested in the architecture of the code for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating out my actual Models from the Data Access layer, so that the Models are completely naive when it comes to Data Access. And in this case it's particularly useful to do this as the Models may be built from the Database or may be built from a SOAP web service. So it seems to me to make sense to have Factories in my data access layer which create Model objects. So here's what I have so far (in my made-up pseudocode): class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {} class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {} These then get used in the controller in a fashion similar to the following: var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider); var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider); // Returns array of Product model objects var XmlProducts = xmlProductCreator.Products(); // Returns array of Product model objects var DbProducts = databaseProductCreator.Products(); So my question is, is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?
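
    For what it's worth, here is one way the shape described above could look. This is only a rough sketch (the class names, members and the IDbConnection/XDocument choices are illustrative, not taken from the question): the Product model stays ignorant of persistence, and each factory owns the mapping from its own source to the model.

        // Rough illustrative sketch (hypothetical names): naive model plus one factory per source.
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;
        using System.Xml.Linq;

        public class Product                    // plain model, no data-access knowledge
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface IProductFactory
        {
            IEnumerable<Product> Products();
        }

        public class ProductsFromXml : IProductFactory
        {
            private readonly XDocument _document;
            public ProductsFromXml(XDocument document) { _document = document; }

            public IEnumerable<Product> Products()
            {
                // Mapping from XML elements to naive Product models lives here.
                return _document.Descendants("product")
                                .Select(p => new Product
                                {
                                    Id = (int)p.Attribute("id"),
                                    Name = (string)p.Element("name")
                                })
                                .ToList();
            }
        }

        public class ProductsFromDatabase : IProductFactory
        {
            private readonly IDbConnection _connection;
            public ProductsFromDatabase(IDbConnection connection) { _connection = connection; }

            public IEnumerable<Product> Products()
            {
                // Mapping from database rows to naive Product models lives here.
                var products = new List<Product>();
                using (var command = _connection.CreateCommand())
                {
                    command.CommandText = "SELECT Id, Name FROM Products";
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                            products.Add(new Product { Id = reader.GetInt32(0), Name = reader.GetString(1) });
                    }
                }
                return products;
            }
        }

    The controller would then depend only on IProductFactory, which also makes it easy to swap in a SOAP-backed factory later.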

    Read the article

  • Caching strategies for entities and collections

    - by Rob West
    We currently have an application framework in which we automatically cache both entities and collections of entities at the business layer (using .NET cache). So the method GetWidget(int id) checks the cache using a key GetWidget_Id_{0} before hitting the database, and the method GetWidgetsByStatusId(int statusId) checks the cache using GetWidgets_Collections_ByStatusId_{0}. If the objects are not in the cache they are retrieved from the database and added to the cache. This approach is obviously quick for read scenarios, and as a blanket approach is quick for us to implement, but requires large numbers of cache keys to be purged when CRUD operations are carried out on entities. Obviously as additional methods are added this impacts performance and the benefits of caching diminish. I'm interested in alternative approaches to handling caching of collections. I know that NHibernate caches a list of the identifiers in the collection rather than the actual entities. Is this an approach other people have tried - what are the pros and cons? In particular I am looking for options that optimise performance and can be implemented automatically through boilerplate generated code (we have our own code generation tool). I know some people will say that caching needs to be done by hand each time to meet the needs of the specific situation but I am looking for something that will get us most of the way automatically.
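
    For comparison, the NHibernate-style collection cache mentioned above can be sketched roughly like this (the ICache and repository names are hypothetical, not part of our framework): the collection key stores only widget ids, and each widget is resolved through its own entity key, so an update to a widget only has to purge GetWidget_Id_{0} rather than every collection key that might contain it.

        // Rough sketch: cache ids for collections, entities separately (hypothetical APIs).
        using System.Collections.Generic;
        using System.Linq;

        public class Widget { public int Id { get; set; } public int StatusId { get; set; } }

        public interface ICache
        {
            T Get<T>(string key) where T : class;
            void Add(string key, object value);
        }

        public interface IWidgetRepository
        {
            Widget GetWidget(int id);
            List<int> GetWidgetIdsByStatusId(int statusId);
        }

        public class WidgetService
        {
            private readonly ICache _cache;
            private readonly IWidgetRepository _repository;

            public WidgetService(ICache cache, IWidgetRepository repository)
            {
                _cache = cache;
                _repository = repository;
            }

            public Widget GetWidget(int id)
            {
                string key = string.Format("GetWidget_Id_{0}", id);
                var widget = _cache.Get<Widget>(key);
                if (widget == null)
                {
                    widget = _repository.GetWidget(id);
                    _cache.Add(key, widget);
                }
                return widget;
            }

            public IList<Widget> GetWidgetsByStatusId(int statusId)
            {
                string key = string.Format("GetWidgets_Collections_ByStatusId_{0}", statusId);
                var ids = _cache.Get<List<int>>(key);
                if (ids == null)
                {
                    ids = _repository.GetWidgetIdsByStatusId(statusId);   // ids only, not entities
                    _cache.Add(key, ids);
                }
                // Each entity comes from its own cache entry, so entity updates stay cheap;
                // the collection key only needs purging when membership changes.
                return ids.Select(GetWidget).ToList();
            }
        }

    The trade-off is an extra round of entity lookups when a collection is read, which is why this suits generated boilerplate better than hand-tuned hot paths.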

    Read the article

  • ASP.NET MVC 2 from Scratch – Part 1 Listing Data from Database

    - by Max
    Part 1 - Listing Data from Database: Let us now learn ASP.NET MVC 2 from scratch by actually developing a front end website for the Chinook database, which is an alternative to the traditional Northwind database. You can get the Chinook database from here. As always, the best way to learn something is by working on it and doing something. The Chinook database has the following schema, and a quick look will help us implement the application in an efficient way. Let us first implement a grid view table with the list of Employees and some details; this table also has Details, Edit and Delete buttons on it to do some operations. This series of posts will concentrate on creating a simple CRUD front end for the Chinook DB using ASP.NET MVC 2. In this post, we will look at listing all the Employees in the database in a tabular format, from which we can then edit and delete them as required. In this post, we will concentrate on setting up our environment and then just designing a page to show tabular information from the database. We need to first set up the SQL Server database; you can download the required version and then set it up on your localhost. Then we need to add the LINQ to SQL Classes required for us to enable interaction with our database. Now after you do the above step, just use your Server Explorer in VS 2010 to navigate to the database, expand the tables node and then drag and drop all the tables onto the Object Relational Designer surface, and there you go: you will have the tables visualized as classes. As simple as that. Now, for the purpose of displaying the data from Employee in a table, we will show only the EmployeeID, Firstname and Lastname. So let us create a class to hold this information by adding a new class called EmployeeList to the ViewModels. We will send this data model to the View so it can be displayed in the page. public class EmployeeList { public int EmployeeID { get; set; } public string Firstname { get; set; } public string Lastname { get; set; } public EmployeeList(int empID, string fname, string lname) { this.EmployeeID = empID; this.Firstname = fname; this.Lastname = lname; } } Ok, now we have got the backend ready. Let us now look at the front end view. We will first create a master page called Site.Master and reuse it across the site. The Site.Master content will be <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.Master.cs" Inherits="ChinookMvcSample.Views.Shared.Site" %>   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title></title> <style type="text/css"> html { background-color: gray; } .content { width: 880px; position: relative; background-color: #ffffff; min-width: 880px; min-height: 800px; float: inherit; text-align: justify; } </style> <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script> <asp:ContentPlaceHolder ID="head" runat="server"> </asp:ContentPlaceHolder> </head> <body> <center> <h1> My Website</h1> <div class="content"> <asp:ContentPlaceHolder ID="body" runat="server"> </asp:ContentPlaceHolder> </div> </center> </body> </html> The code-behind Site.Master.cs does not contain anything. In the actual Index.aspx view, we add the code to simply iterate through the collection of EmployeeList that was sent to the View via the Controller.
    So at the top of the Index.aspx view, we have an Inherits attribute which says Inherits="System.Web.Mvc.ViewPage<IEnumerable<ChinookMvcSample.ViewModels.EmployeeList>>" In this line, we declare that the page consumes an IEnumerable collection of EmployeeList. Once we specify this and compile the project, we can consume the EmployeeList objects in our Index.aspx page and access all their methods and properties. <table class="styled" cellpadding="3" border="0" cellspacing="0"> <tr> <th colspan="3"> </th> <th> First Name </th> <th> Last Name </th> </tr> <% foreach (var item in Model) { %> <tr> <td align="center"> <%: Html.ActionLink("Edit", "Edit", new { id = item.EmployeeID }, new { id = "links" })%> </td> <td align="center"> <%: Html.ActionLink("Details", "Details", new { id = item.EmployeeID }, new { id = "links" })%> </td> <td align="center"> <%: Html.ActionLink("Delete", "Delete", new { id = item.EmployeeID }, new { id = "links" })%> </td> <td> <%: item.Firstname %> </td> <td> <%: item.Lastname %> </td> </tr> <% } %> <tr> <td colspan="5"> <%: Html.ActionLink("Create New", "Create") %> </td> </tr> </table> Html.ActionLink is an HTML helper that creates a hyperlink in the page; in the one we have used, the first parameter is the text to be used for the hyperlink, the second is the action name, the third is the route parameter to be passed, and the last is the set of attributes to be added when the hyperlink is rendered in the page. Here we are adding id=”links” to the hyperlinks that are created in the page. In the Index.aspx page, we also add some jQuery to apply alternate row colours and to highlight rows on mouse over. Now for the Controller that handles the requests and directs them to the right view. For the Index view, the controller action would be public ActionResult Index() { //var Employees = from e in data.Employees select new EmployeeList(e.EmployeeId,e.FirstName,e.LastName); //return View(Employees.ToList()); return View(_data.Employees.Select(p => new EmployeeList(p.EmployeeId, p.FirstName, p.LastName))); } Let us also write a unit test using NUnit for the above, just testing EmployeeController’s Index. DataClasses1DataContext _data; public EmployeeControllerTest() { _data = new DataClasses1DataContext("Data Source=(local);Initial Catalog=Chinook;Integrated Security=True"); }   [Test] public void TestEmployeeIndex() { var e = new EmployeeController(_data); var result = e.Index() as ViewResult; var employeeList = result.ViewData.Model; Assert.IsNotNull(employeeList, "Result is null."); } In the EmployeeControllerTest constructor, we set the data context to be used while running the tests. Then in the actual test, we just ensure that the view result returned by Index is not null. Here is a zip of the entire solution files up to this point. Let me know if you have any doubts or need clarification. Cheers! Have a nice day.

    Read the article

  • Perl: Negative look behind regex question [migrated]

    - by James
    The perlre page in perldoc didn't go into much detail on negative look-around, but I tried testing it and it didn't work as expected. I want to see if I can differentiate a C preprocessor macro definition (e.g. #define MAX(X) ....) from actual usage (y = MAX(x);), but it didn't work as expected. my $macroName = 'MAX'; my $macroCall = "y = MAX(X);"; my $macroDef = "# define MAX(X)"; my $boundary = qr{\b$macroName\b}; my $bstr = " MAX(X)"; if($bstr =~ /$boundary/) { print "boundary: $bstr matches: $boundary\n"; } else { print "Error: no match: boundary: $bstr, $boundary\n"; } my $negLookBehind = qr{(?<!define)\b$macroName\b}; if($macroCall =~ /$negLookBehind/) # "y = MAX(X)" matches "(?<!define)\bMAX\b" { print "negative look behind: $macroCall matches: $negLookBehind\n"; } else { print "no match: negative look behind: $macroCall, $negLookBehind\n"; } if($macroDef =~ /$negLookBehind/) # "#define MAX(X)" should not match "(?<!define)\bMAX\b" { print "Error: negative look behind: $macroDef matches: $negLookBehind\n"; } else { print "no match: negative look behind: $macroDef, $negLookBehind\n"; } It seems that both $macroDef and $macroCall match the regex /(?<!define)\b$macroName\b/. I backed off from the original /(?<\#)\s*(?<!define)\b$macroName\b/ since that didn't work either. So what did I screw up? Also, does Perl allow chaining of multiple look-around expressions?

    Read the article

  • The spork/platypus average: shameless self promotion

    - by Roger Hart
    This is the video of the presentation I gave at UA Europe and TCUK this year. The actual sub-title was "Content strategy at Red Gate Software", but this heading feels more honest. For anybody who missed it, or is just vaguely interested, here's a link to me talking about de-suckifying the web. You can find the slideshare deck here, too.* Watching it back is more than a little embarrassing, and makes me really, really want to do a follow up, so I can do three things: explain the rest of the big web project, now we've done it; give some data on the outcome of the content review; and make a grovelling apology to our marketing guys, who I've been unfairly mean to in a childish effort to look cool. There are a whole bunch of other TCUK presentations online, too. You can find them all here: http://tiny.cc/tcuk10_videos I'd particularly recommend Chris Atherton's: "Everything you always wanted to know about psychology and technical communication" - it's full of cool stuff. You should probably also watch David Black's opening keynote, which managed to make my hour of precocious grandstanding look measured, meek, and helpful. He actually makes some interesting points, but you'd basically have to ship Richard Dawkins off to Utah if you wanted to go further out of your way to aggravate your audience. It does give an engaging account of running a large tech comms project, and raises some questions about how we propose to understand a world where increasing amounts of our stuff gets done by increasingly many, increasingly complicated tissues of APIs. Well, sort of. That's what all the notes I made were about, anyway.   *Slideshare ate my fonts. Just so we're clear on this: I'd never use badly-kerned Arial in a presentation. Don't worry.

    Read the article

  • Should I give the answer to a failed interview coding exercise?

    - by GlenH7
    We had a senior level interview candidate fail a nuance of the FizzBuzz question*. I mean, really, utterly, completely, failed the question - not even close. I even coached him through to thinking about using a loop and that 3 and 5 were really worth considering as special cases. He blew it. Just for QA purposes, I gave the same exact question to three teammates; gave them 5 minutes; and then came back to collect their pseudo-code. All of them nailed it and hadn't seen the question before. Two asked what the trick was... On a different logic exercise, the candidate showed some understanding of some of the features available within the language he chose to use (C#). So it's not as if he had never written a line of code. But his logic still stunk. My question is whether or not I should have given him the answer to the logic questions. He knew he blew them, and acknowledged it later in the interview. On the other hand, he never asked for the answer or what I was expecting to see. I know coding exercises can be used to set candidates up for failure (again, see second link from above). And I really tried to help him home in on answering the core of the question. But this was a senior level candidate and Fizz-Buzz is, frankly, ridiculously easy even with accounting for interview jitters. I felt like I should have shown him a way of solving the problem so that he could at least learn from the experience. But again, he didn't ask. What's the right way to handle that situation? *Okay, that's not the link to the actual FizzBuzz question, but it is a good P.SE discussion around FizzBuzz and links to the various aspects of it.
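
    For context on scale: the whole exercise is expected to be only a handful of lines. One of many acceptable shapes, sketched here in C# purely for illustration:

        using System;

        class FizzBuzz
        {
            static void Main()
            {
                // Multiples of 3 print "Fizz", multiples of 5 print "Buzz",
                // multiples of both print "FizzBuzz", everything else prints the number.
                for (int i = 1; i <= 100; i++)
                {
                    if (i % 15 == 0) Console.WriteLine("FizzBuzz");
                    else if (i % 3 == 0) Console.WriteLine("Fizz");
                    else if (i % 5 == 0) Console.WriteLine("Buzz");
                    else Console.WriteLine(i);
                }
            }
        }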

    Read the article

  • Keeping Aspect Screen Ratio While Stays in Center

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to keep a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and played it on screen. Here's the blueberry image example, and it's perfectly rounded: When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it was being flattened a bit. Please observe the snapshot below and you can see the blueberry is almost, but not quite, perfectly rounded: Now, when I tried the suggested code for aspect ratio, the perfect circle was retained but another problem occurred. The problem is that I was expecting the view to be centered, but instead it has been moved to the right, leaving half the screen black. It looks like this: Here is my code using the suggested screen aspect ratio code: Class' Field // Ingredients Needed for Screen Aspect Ratio private static final int VIRTUAL_WIDTH = 720; private static final int VIRTUAL_HEIGHT = 1280; private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH)/((float) VIRTUAL_HEIGHT); private Camera Mother_Camera; private Rectangle Viewport; render() // Camera updating... Mother_Camera.update(); Mother_Camera.apply(Gdx.gl10); // Reseting viewport... Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height); // Clear previous frame. Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); show() Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT); Was this code meant to fix the screen aspect-ratio proportion, or is it statically dependent on the actual device's width and height? *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317

    Read the article

  • Can .htaccess slow down a site?

    - by Cody Sharp
    I'm working with a client on an e-commerce website. I implemented clean URLs using .htaccess. I also used .htaccess to solve canonical issues such as redirecting www to non-www and removing index.php from the URL. The website recently began to slow down dramatically, sometimes not even loading. The site is hosted on GoDaddy, and when the client called GoDaddy they told him it was the .htaccess file slowing down the website. I find this highly unlikely because of my past experiences, but I'm not 100% sure. My thinking is that the client's website is most likely on a shared server with a busy neighborhood, thus slowing down the site. It's not always slow, but rather sporadic throughout the day, loading fast at some points and slow at other points in time. Can the .htaccess file slow down a website to a crawl? If so, are there better ways to solve these problems with different rewrite rules and such? Here is what the actual .htaccess file looks like: Options +FollowSymlinks RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} ^www.example.net [NC] RewriteRule ^(.*)$ http://example.net/$1 [L,R=301] RewriteRule ^products/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php p=product&product_code=$1 [L] RewriteRule ^catalog/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php p=catalog&catalog_code=$1 [L] RewriteRule ^pages/([0-9a-zA-Z\_\-]*)\.htm([l]?)$ index.php?p=page&page_id=$1 [L] RewriteRule ^index\.htm([l]?)$ index.php?p=home [L] RewriteRule ^site_map\.htm([l]?)$ index.php?p=site_map [L] RewriteCond %{QUERY_STRING} ^p=home$ RewriteRule (.*) ? [R=permanent] I'm a .htaccess and regex novice, so any pointed out mistakes would also help. Thank you.

    Read the article

  • Using HTML5 Today part 2 – Fixing Semantic tags with a Shiv

    - by Steve Albers
    Semantic elements and the Shiv! This is the second entry in the series of demos from the “Using HTML5 Today” talk. For the definitive discussion on unknown elements and the HTML5 Shiv check out Mark Pilgrim’s Dive Into HTML5 online book at http://diveintohtml5.info/semantics.html#unknown-elements Semantic tags increase the meaning and maintainability of your markup, help make your page more computer-readable, and can even provide opportunities for libraries that are written to automagically enhance content using standard tags like <nav>, <header>,  or <footer>. Legacy IE issues However, new HTML5 tags get mangled in IE browsers prior to version 9.  To see this in action, consider this bit of HTML code which includes the new <article> and <header> elements: Viewing this page using the IE9 developer tools (F12) we see that the browser correctly models the hierarchy of tags listed above: But if we switch to IE8 Browser Mode in developer tools things go bad: Did you know that a closing tag could close itself?? The browser loses the hierarchy & closes all of the new tags.  The new tags become unusable and the page structure falls apart. Additionally block-level elements lose their block status, appearing as inline.    The Fix (good) The block-level issue can be resolved by using CSS styling.  Below we set the article, header, and footer tags as block tags. article, header, footer {display:block;} You can avoid the unknown element issue by creating a version of the element in JavaScript before the actual HTML5 tag appears on the page: <script> document.createElement("article"); document.createElement("header"); document.createElement("footer"); </script> The Fix (better) Rather than adding your own JS you can take advantage of a standard JS library such as Remy Sharp’s HTML5 Shiv at http://code.google.com/p/html5shiv/.  By default the Modernizr library includes HTML5 Shiv, so you don’t need to include the shiv code separately if you are using Modernizr.

    Read the article

  • enabling a user (created with adduser command) for lightdm graphical login

    - by Basile Starynkevitch
    I just installed Ubuntu 12.04 AMD64 on a new (empty) hard disk (because the previous one crashed). Since I am quite familiar with Debian, I created two accounts with the adduser command. Since I also have an NFSv3 file system, I explicitly gave user ids when creating them (for simplicity, I keep the same user id on the home server, running Debian; the user names contain digits; I'm not using LDAP), e.g. # grep bethy /etc/passwd bethy46:x:501:501:Bethy XXX,,,06123456:/home/bethy:/bin/bash # grep bethy /etc/group bethy64:x:501: # grep bethy /etc/shadow bethy46:$6$vQ-wmuchmorethings-2o/:15479:0:99999:7:: Of course /home/bethy exists. The actual user name is slightly different, and I am not showing the real entries (for obvious privacy reasons). However, these users don't appear at the graphical login prompt (lightdm). They do exist in the system: they have entries in /etc/passwd & /etc/shadow and I (partly) restored their /home. I've got no specific user config under /etc/lightdm ; file /etc/lightdm/users.conf mentions # NOTE: If you have AccountsService installed on your system, then LightDM # will use this instead and these settings will be ignored but I have no idea how to deal with AccountsService through the command line. As you probably guessed, I really dislike doing administrative tasks through a graphical interface; I much prefer the command line. What did I do wrong? How can a user entry not appear at the lightdm graphical login? (I need to have my wife's user entry visible for graphical login.) I am not asking how to hide a user, but how to show it at the lightdm graphical prompt. Work-around: as I have been told in comments by Nirmik and by Enzotib, lightdm probably doesn't show any users with a uid less than 1024. So I changed all the uids to be more than 8200 (including on the Debian NFS server) and this made all the users visible at the graphical prompt. It is a pain that such a threshold is not really documented.

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if it does, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks for booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions, and also spells out the expected result values, like so: unit::assert( test_me() == 17 ) What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example: unit::probe( test_me() ) Here the probe actually doubles as a collector in the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD. It's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be manually accomplished by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm inquiring about use with scripting languages, and not about Java. I'm aware that the xUnit pattern originates there, and hence why it's as elaborate as it is. Btw, I've discovered one test execution framework which allows for shortening simple test notations to: test_me(); // 17 While the result data is thus no longer coded in (it's a comment), that's still not a complete separation and of course would work only for scalar results.
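
    What is described here is close to what is nowadays often discussed under names like snapshot, approval or "golden master" testing: the framework records the observed output the first time and compares against that recording afterwards. Just to make the mechanism concrete, here is a minimal sketch, in C# only for concreteness (the Probe helper is hypothetical, not an existing framework; the idea itself is language-neutral):

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Hypothetical "probe" helper: records the observed value on the first run,
        // verifies against the recording on later runs. The expected value never
        // appears in the test code itself.
        public static class Probe
        {
            private const string StorePath = "probes.txt";
            private static readonly Dictionary<string, string> Recorded = Load();

            public static void Check(string name, object actual)
            {
                string value = Convert.ToString(actual);
                if (!Recorded.ContainsKey(name))
                {
                    Recorded[name] = value;                              // first run: collect
                    File.AppendAllText(StorePath, name + "=" + value + Environment.NewLine);
                }
                else if (Recorded[name] != value)                        // later runs: verify
                {
                    throw new Exception("Probe '" + name + "': expected " + Recorded[name] + ", got " + value);
                }
            }

            private static Dictionary<string, string> Load()
            {
                var map = new Dictionary<string, string>();
                if (!File.Exists(StorePath)) return map;
                foreach (var line in File.ReadAllLines(StorePath))
                {
                    var parts = line.Split(new[] { '=' }, 2);
                    if (parts.Length == 2) map[parts[0]] = parts[1];
                }
                return map;
            }
        }

        // Usage: the test only contains the tested logic, e.g.
        //     Probe.Check("test_me", TestMe());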

    Read the article

  • BDD/TDD vs JAD?

    - by Jonathan Conway
    I've been proposing that my workplace implement Behavior-Driven Development, by writing high-level specifications in a scenario format, and in such a way that one could imagine writing a test for it. I do know that working against testable specifications tends to increase developer productivity. And I can already think of several examples where this would be the case on our own project. However it's difficult to demonstrate the value of this to the business. This is because we already have a Joint Application Development (JAD) process in place, in which developers, management, user experience and testers all get together to agree on a common set of requirements. So, they ask, why should developers work against the test cases created by testers? These are for verification and are based on the higher-level specs created by the UX team, which the developers currently work off. This, they say, is sufficient for developers and there's no need to change how the specs are written. They seem to have a point. What is the actual benefit of BDD/TDD, if you already have a test team whose test cases are fully compatible with the higher-level specs currently given to the developers?

    Read the article

  • Refactoring this code that produces a reverse-lookup hash from another hash

    - by Frank Joseph Mattia
    This code is based on the idea of a Form Object http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/ (see #3 if unfamiliar with the concept). My actual code in question may be found here: https://gist.github.com/frankjmattia/82a9945f30bde29eba88 The code takes a hash of objects/attributes and creates a reverse lookup hash to keep track of their delegations. A delegation looks like this: delegate :first_name, :email, to: :user, prefix: true But I am creating the delegations manually from a hash like this: DELEGATIONS = { user: [ :first_name, :email ] } At runtime, when I want to look up the translated attribute names for the objects, all I have to go on are the delegated/prefixed attribute names (I have to use a prefix to avoid naming collisions) like :user_first_name, which aren't in sync with the Rails i18n way of doing it: en: activerecord: attributes: user: email: 'Email Address' The code I have takes the above delegations hash and turns it into a lookup table, so that when I override human_attribute_name I can get back the original attribute name and its class. Then I send #human_attribute_name to the original class with the original attribute name as its argument. The code I've come up with works, but it is ugly to say the least. I've never really used #inject, so this was a crash course for me, and I am quite unsure if this code is an effective way of solving my problem. Could someone recommend a simpler solution that does not require a reverse lookup table, or does that seem like the right way to go? Thanks, - FJM

    Read the article

  • Page Inspector and Visual Studio 2012

    - by nikolaosk
    In this post I will be looking into a new feature that has been added to the VS 2012 IDE. I am talking about Page Inspector, which gives developers a very handy way to identify layout issues and to find out which part of the server-side code is responsible for a given HTML snippet. If you are interested in reading other posts about VS 2012 and .Net 4.5 please have a look here and here. This tool is integrated into the VS 2012 IDE. We can launch it in different ways. 1) We will create a new ASP.Net MVC application (an Internet application). I will not add any code. 2) In the Solution Explorer I choose my project and right-click. From the available options select View in Page Inspector. Have a look at the picture below. 3) We can launch Page Inspector from the Standard Toolbar. Please have a look at the picture below. 4) Let's view our application with Page Inspector. First I inspect the About link in the menu. I can see very quickly the HTML that is rendered for that link to appear. I also see the server-side code (the actual view, _Layout.cshtml) that is responsible for that link. This is something developers have always craved. We can also see the CSS styles that are used to style this link (About). Have a look at the picture below. Obviously there are similar tools that I have been using in the past when I wanted to change a part of the HTML or see what piece of CSS code affects my layout. I used Firebug when viewing my web applications in the Firefox browser. Internet Explorer and Chrome also have great similar tools that help web developers identify issues with a site's appearance. Please bear in mind that Page Inspector works with all forms of the ASP.Net stack, e.g. Web Forms and Web Pages. Hope it helps!

    Read the article

  • Handling Indirection and keeping layers of method calls, objects, and even xml files straight

    - by Cervo
    How do you keep everything straight as you trace deeply into a piece of software through multiple method calls, object constructors, object factories, and even spring wiring. I find that 4 or 5 method calls are easy to keep in my head, but once you are going to 8 or 9 calls deep it gets hard to keep track of everything. Are there strategies for keeping everything straight? In particular, I might be looking for how to do task x, but then as I trace down (or up) I lose track of that goal, or I find multiple layers need changes, but then I lose track of which changes as I trace all the way down. Or I have tentative plans that I find out are not valid but then during the tracing I forget that the plan is invalid and try to consider the same plan all over again killing time.... Is there software that might be able to help out? grep and even eclipse can help me to do the actual tracing from a call to the definition but I'm more worried about keeping track of everything including the de-facto plan for what has to change (which might vary as you go down/up and realize the prior plan was poor). In the past I have dealt with a few big methods that you trace and pretty much can figure out what is going on within a few calls. But now there are dozens of really tiny methods, many just a single call to another method/constructor and it is hard to keep track of them all.

    Read the article

  • Website restyle, SEO migration plan?

    - by Goboozo
    I am currently in a project for one of my biggest clients. We have built a website that will -replace- the old website. When it comes to actual content it is largely the same. However, the presentation of the content has changed drastically, and from our point of view it is much more user-friendly (the main reason to update the site). Now, since the site's presentation has changed, we have some major changes in: HTML & CSS, to change the presentation of the content; URLs, to make them easier to understand (301 redirects have been taken care of and are in place); breadcrumbs, to enhance the navigation (we have made the breadcrumbs match exactly with the URLs); pagination, which was added to enable content browsing; and title tags, with descriptive title tags added to the major links and buttons. Basically all user content, including meta tags, has remained the same. Now, since this company is rather successful and 90% of its clients come from Google's organic results, I am obliged to take all necessary precautions. People tell me I need a migration plan to prevent the site being hurt in Google, but I have never worked with such a plan... So, based on the above: would you consider a migration plan necessary, and what precautions/actions would you recommend to prevent us from dropping in our SERP positions? Many thanks in advance for your answers.

    Read the article

  • does my js replace view?

    - by Milla Well
    I am writing a web application which is based on CodeIgniter and jQuery. I primarily use Ajax to call my controller functions, and it turned out that there are just 4 view*.php files, because most of my controller functions return JSON data, which is processed in my jQuery. So my actual code is divided into a kind of MVCC model: CodeIgniter model (db, computations); CodeIgniter controller (filtering, XSS cleaning, checking permissions, calling model functions); jQuery controller (callback functions); jQuery view (adding/removing classes, appending elements, ...). So I violate the paradigm of not using the echo function in my CodeIgniter controller and simply call echo json_encode($result); because it doesn't make any sense to me to create a view*.php file for one line of code. Especially because all the regular view*.php stuff is covered in my jQuery view. I was wondering if I am missing something, or if there is a way to integrate this jQuery controller into my CodeIgniter. I found some words on this topic, but this seems pretty handmade. Are there some neat solutions? Does an MVCC model make sense?

    Read the article

  • SQL SERVER – What is the Maximum Relational Database Size Supported by Single Instance?

    - by Pinal Dave
    I often get asked the following question: “How much data can SQL Server handle?” Every single time I get this question, I ask back: “How much data can your storage system handle?” The reason I ask this question back is that, in reality, for enterprise systems the limitation of storage is no longer an issue. As a matter of fact, most databases nowadays are limited by the size of the storage system. SQL Server is an enterprise system and it is a very mature product. Even so, if you still want to know what the actual limit is, here is the answer: SQL Server 2008 R2, 2012 and 2014 have a maximum capacity of 524 PB (petabytes) in the Enterprise, BI and Standard editions. SQL Server Express has a limitation of 10 GB due to its nature. I guess now, when you look at my question, it will make sense that it all depends on the size of your storage system. I personally believe that at this point in time 524 PB is quite a huge amount of data, but we never know: reading this blog post after 10 years, we may all wonder what I was thinking. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Reading source code to learn

    - by perl.j
    As you develop as a programmer, IMO, you begin to see different practices, different algorithms, and "more than one way to do it". Seeing this code can be a great learning experience for you, even though you did not write it. But is doing this only going to confuse you? For example, let's say you have a library in any language that was created by a colleague, and you have been using it for a while. You decide to look at the actual source code, regardless of how extensive it is, to get a better look at how this library is written. For the sake of example, the function you use most often from this library is the max function, which finds the larger of two numbers. But this function is a lot more complicated than it needs to be. The way it is written is confusing the heck out of you, and you don't know how it works. Will this make you a better programmer, because you realize how complicated it is for such a simple function, or will it make you a worse coder because you feel less confident? So my question, in general, is: does reading source code make you a better programmer, and if so, how? If not, why do people still do it?

    Read the article

  • How do you stay productive when dealing with extremely badly written code?

    - by gaearon
    I don't have much experience working in the software industry, being self-taught and having participated in open source before deciding to take a job. Now that I work for money, I also have to deal with some unpleasant stuff, which is normal of course. Recently I was assigned to add logging to a large SharePoint project which was written by a programmer who was obviously learning to code on the job. After 2 years of collaboration, the client switched to our company, but the damage was done, and now somehow I need to maintain this code. Not that the code is too hard to read. Despite its problems (each project has one class with several copy-pasted methods, enormous if nestings, Systems Hungarian, undisposed connections) it's still readable. However, I found myself absolutely unproductive despite working on something as simple as adding logging. Basically, I just need to go through the code step by step and add some trace calls. However, the idiocy of the code is so annoying that I get tired within 10 minutes of starting. In the beginning, I used to add using constructs, reduce nesting by reversing ifs, and rename the variables to readable names, but the project is large, and eventually I gave up. I know this is not the task I should be doing, but at least reducing the mess gave me some kind of psychological reward so I could keep going. Now the trick has stopped working, and I still have 60% of my work to do. I started having headaches after work, and I no longer get the feeling of satisfaction I used to get, which would usually allow me to code for 10 hours straight and still feel fresh. This is not just one big rant, for I really do have an actual question: is there a way to stay productive and not fight windmills? Is there some kind of psychological trick to stay focused on the task, instead of thinking “How stupid is that?” each time I see another clever trick by the previous programmer? The problem with adding logging is that I actually have to understand what the code does, and doing so hurts my brain in an unpleasant fashion.

    Read the article

  • How can I find the shortest path between two subgraphs of a larger graph?

    - by Pops
    I'm working with a weighted, undirected multigraph (loops not permitted; most node connections have multiplicity 1; a few node connections have multiplicity 2). I need to find the shortest path between two subgraphs of this graph that do not overlap with each other. There are no other restrictions on which nodes should be used as start/end points. Edges can be selectively removed from the graph at certain times (as explained in my previous question) so it's possible that for two given subgraphs, there might not be any way to connect them. I'm pretty sure I've heard of an algorithm for this before, but I can't remember what it's called, and my Google searches for strings like "shortest path between subgraphs" haven't helped. Can someone suggest a more efficient way to do this than comparing shortest paths between all nodes in one subgraph with all nodes in the other subgraph? Or at least tell me the name of the algorithm so I can look it up myself? For example, if I have the graph below, the nodes circled in red might be one subgraph and the nodes circled in blue might be another. The edges would all have positive integer weights, although they're not shown in the image. I'd want to find whatever path has the shortest total cost as long as it starts at a red node and ends at a blue node. I believe this means the specific node positions and edge weights cannot be ignored. (This is just an example graph I grabbed off Wikimedia and drew on, not my actual problem.)
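
    One common way to avoid the all-pairs comparison (assuming non-negative weights, which the post implies) is a single multi-source Dijkstra run: seed the queue with every node of the first subgraph at distance 0 and stop as soon as any node of the second subgraph is settled. This is equivalent to adding a virtual super-source connected to all red nodes by zero-weight edges. A rough sketch, with a made-up adjacency-list representation:

        using System.Collections.Generic;

        // Rough sketch: multi-source Dijkstra between two node sets.
        // Every source node starts at distance 0; the first settled target node
        // gives the shortest source-set-to-target-set distance.
        public static class SubgraphDistance
        {
            public static int? Shortest(
                Dictionary<int, List<(int To, int Weight)>> adjacency,
                ISet<int> sources,
                ISet<int> targets)
            {
                var dist = new Dictionary<int, int>();
                var queue = new SortedSet<(int Dist, int Node)>();

                foreach (int s in sources)
                {
                    dist[s] = 0;
                    queue.Add((0, s));
                }

                while (queue.Count > 0)
                {
                    var (d, node) = queue.Min;
                    queue.Remove(queue.Min);

                    if (targets.Contains(node)) return d;   // first settled target node is optimal
                    if (d > dist[node]) continue;           // stale queue entry, skip

                    if (!adjacency.TryGetValue(node, out var edges)) continue;
                    foreach (var (to, weight) in edges)
                    {
                        int candidate = d + weight;
                        if (!dist.TryGetValue(to, out int best) || candidate < best)
                        {
                            dist[to] = candidate;
                            queue.Add((candidate, to));
                        }
                    }
                }
                return null;   // the two subgraphs are not connected at the moment
            }
        }

    Keeping track of predecessors while relaxing edges recovers the actual path as well, not just its cost.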

    Read the article

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification.

    Read the article

  • Is there a design pattern for chained observers?

    - by sharakan
    Several times, I've found myself in a situation where I want to add functionality to an existing Observer-Observable relationship. For example, let's say I have an Observable class called PriceFeed, instances of which are created by a variety of PriceSources. Observers on this are notified whenever the underlying PriceSource updates the PriceFeed with a new price. Now I want to add a feature that allows a (temporary) override to be set on the PriceFeed. The PriceSource should still update prices on the PriceFeed, but for as long as the override is set, whenever a consumer asks PriceFeed for its current value, it should get the override. The way I did this was to introduce a new OverrideablePriceFeed that is itself both an Observer and an Observable, and that decorates the actual PriceFeed. Its implementation of .getPrice() is straight from Chain of Responsibility, but what about the handling of Observable events? When an override is set or cleared, it should issue its own event to Observers, as well as forwarding events from the underlying PriceFeed. I think of this as some kind of a chained observer, and was curious if there's a more definitive description of a similar pattern.
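
    For what it's worth, here is a rough sketch of that decorating feed (in C#, with event-based observer mechanics and illustrative names; the original presumably uses Java's Observer/Observable). Whether updates from the source are still forwarded while an override is active is itself a design decision; the sketch below suppresses them:

        using System;

        // Rough sketch (hypothetical types): the decorating feed observes the underlying
        // feed, forwards its events, and raises its own event when the override changes.
        public class PriceFeed
        {
            public event Action<decimal> PriceChanged;
            private decimal _price;

            public decimal GetPrice() => _price;

            public void SetPrice(decimal price)          // called by the PriceSource
            {
                _price = price;
                PriceChanged?.Invoke(price);
            }
        }

        public class OverrideablePriceFeed
        {
            private readonly PriceFeed _inner;
            private decimal? _override;

            public event Action<decimal> PriceChanged;

            public OverrideablePriceFeed(PriceFeed inner)
            {
                _inner = inner;
                // Chained observer: forward updates from the decorated feed,
                // here suppressed while an override is active.
                _inner.PriceChanged += p => { if (_override == null) PriceChanged?.Invoke(p); };
            }

            // Chain of Responsibility: answer from the override if set, else ask the inner feed.
            public decimal GetPrice() => _override ?? _inner.GetPrice();

            public void SetOverride(decimal value)
            {
                _override = value;
                PriceChanged?.Invoke(value);             // issue our own event
            }

            public void ClearOverride()
            {
                _override = null;
                PriceChanged?.Invoke(_inner.GetPrice()); // back to the underlying price
            }
        }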

    Read the article

  • What deployment framework to use?

    - by jeruki
    We are trying to figure out what deployment method/framework to use with a Python application; it has a basic WSGI server to make some REST resources available, and a set of static web pages with the interface that are served through Apache. The situation is as follows: my team works on isolated parts of the program and sometimes together on specific modules; we have different testing servers and one master server; we all work locally, sync the code using git, and then run a bash script that copies the files from the Windows machines to the indicated Linux server (using ssh) and then restarts the app. After thinking about it, this doesn't seem to be the right way to do it: the script overwrites all the files on the server with the local files every time. We want to be able to work on the same server without the worry of overwriting other people's code, and we need to deploy to different servers to avoid restarting the service while others work with it; in the near future we will also need to deploy to the master or several clones of the master server when the application reaches a more mature state. We found several options (capistrano, kwate, chef or fortress, even fleet) but we wanted to have opinions from people that have used them to be sure it is what we need. So these are the main questions: Are these the kind of programs we should be looking at to achieve a safe concurrent deployment process? Which one have you used/recommend and why? Do you think it would help in our actual situation? Thank you so much for your feedback and advice on this.

    Read the article

  • Early Z culling - Ogre

    - by teodron
    This question is concerned with how one can enable this "pixel filter" to work within an Ogre based app. Simply put, one can write two passes, the first without writing any colour values to the frame buffer lighting off colour_write off shading flat The second pass is the one that employs heavy pixel shader computations, hence it would be really nice to get rid of those hidden surface patches and not process them pixel-wise. This approach works, except for one thing: objects with alpha, such as billboard trees suffer in a peculiar way - from one side, they seem to capture the sky/background within their alpha region and ignore other trees/houses behind them, while viewed from the other side, they exhibit the desired behavior. To tackle the issue, I thought I could write a custom vertex shader in the first pass and offset the projected Z component of the vertex a little further away from its actual position, so that in the second pass there is a need to recompute correctly the pixels of the objects closest to the camera. This doesn't work at all, all surfaces are processed in the pixel shader and there is no performance gain. So, if anyone has done a similar trick with Ogre and alpha objects, kindly please help.

    Read the article
