Search Results

Search found 6605 results on 265 pages for 'complex networks'.

Page 214/265

  • Handling conflicting priorities and expectations in project development

    - by jasonk
    There are any number of situations in the standard day where priority conflicts exist for projects. Management wants maximum productivity from employees. Marketing wants maximum salability and fast turnaround. Ownership wants maximum profit. Customers want usability and low cost. Regardless of the origin of the demands, time and money are always the limiting factor in business. Sometimes project elements have intrinsic or goodwill benefits for which there is no hard-and-fast way to measure in monetary terms (e.g. arguments for an attractive UI vs. a functional but plain one). Other elements of software may provide “mental breaks” or a motivating “cool factor” for developers that can get them back on track on other bigger, more complex issues. While they may sidetrack the project short term, they may have greater results long term through improved job satisfaction, etc. Continued training is a must, but working it in can set back progress. What are your suggestions for setting priorities? How do you evaluate requests/demands on your projects? What are your suggestions for communicating and passing those on to your team in a way that keeps them focused?


  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child relationships - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternatively, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
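
    A streaming take on the fragment idea above might look roughly like this in C# (the file name, the "order"/"items" element names and the XPath string are placeholders, not from the question): XmlReader walks the file forward, and only the subtree of each matching element is loaded into an XPathDocument, so memory use is proportional to the largest fragment rather than the whole file.

        using System;
        using System.Xml;
        using System.Xml.XPath;

        class FragmentXPath
        {
            static void Main()
            {
                using (XmlReader reader = XmlReader.Create("huge.xml"))
                {
                    // Skip forward to each <order> element without loading the rest of the file.
                    while (reader.ReadToFollowing("order"))
                    {
                        // Load only this element's subtree into an in-memory, XPath-capable snapshot.
                        using (XmlReader subtree = reader.ReadSubtree())
                        {
                            XPathDocument fragment = new XPathDocument(subtree);
                            XPathNavigator nav = fragment.CreateNavigator();

                            // Run the (possibly complex) XPath against the small fragment only.
                            XPathNodeIterator hits = nav.Select("order/items/item[@qty > 10]/name");
                            while (hits.MoveNext())
                                Console.WriteLine(hits.Current.Value);
                        }
                    }
                }
            }
        }

    This only works when, as described above, the expensive queries never need to cross fragment boundaries; queries that span the whole document would still need an XSLT/streaming rewrite or a full in-memory load.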


  • NullPointerException in javax.swing.text.SimpleAttributeSet.addAttribute

    - by Paul Reiners
    Has anyone ever seen an exception like this?

        ERROR: java.lang.NullPointerException: null
        at java.util.Hashtable.put(null:-1)
        at javax.swing.text.SimpleAttributeSet.addAttribute(null:-1)
        at javax.swing.text.SimpleAttributeSet.addAttributes(null:-1)
        at javax.swing.text.StyledEditorKit.createInputAttributes(null:-1)
        at javax.swing.text.StyledEditorKit$AttributeTracker.updateInputAttributes(null:-1)
        at javax.swing.text.StyledEditorKit$AttributeTracker.caretUpdate(null:-1)
        at javax.swing.text.JTextComponent.fireCaretUpdate(null:-1)
        at javax.swing.text.JTextComponent$MutableCaretEvent.fire(null:-1)
        at javax.swing.text.JTextComponent$MutableCaretEvent.mouseReleased(null:-1)
        at java.awt.AWTEventMulticaster.mouseReleased(null:-1)
        at java.awt.AWTEventMulticaster.mouseReleased(null:-1)
        at java.awt.Component.processMouseEvent(null:-1)
        at javax.swing.JComponent.processMouseEvent(null:-1)
        at java.awt.Component.processEvent(null:-1)
        at java.awt.Container.processEvent(null:-1)
        at java.awt.Component.dispatchEventImpl(null:-1)
        at java.awt.Container.dispatchEventImpl(null:-1)
        at java.awt.Component.dispatchEvent(null:-1)
        at java.awt.LightweightDispatcher.retargetMouseEvent(null:-1)
        at java.awt.LightweightDispatcher.processMouseEvent(null:-1)
        at java.awt.LightweightDispatcher.dispatchEvent(null:-1)
        at java.awt.Container.dispatchEventImpl(null:-1)
        at java.awt.Window.dispatchEventImpl(null:-1)
        at java.awt.Component.dispatchEvent(null:-1)
        at java.awt.EventQueue.dispatchEvent(null:-1)
        at java.awt.EventDispatchThread.pumpOneEventForFilters(null:-1)
        at java.awt.EventDispatchThread.pumpEventsForFilter(null:-1)
        at java.awt.EventDispatchThread.pumpEventsForHierarchy(null:-1)
        at java.awt.EventDispatchThread.pumpEvents(null:-1)
        at java.awt.EventDispatchThread.pumpEvents(null:-1)
        at java.awt.EventDispatchThread.run(null:-1)

    I wish I could tell you an easy way to reproduce this, but I can’t. It’s happening in a Java Swing application I maintain. It happens infrequently and the application is quite complex. I know it’s a bit of a long shot just showing this stack trace, but I thought I’d try.


  • Help understanding the Single Responsibility Principle

    - by user204588
    I'm trying to understand what a responsibility actually is, so I want to use an example of something I'm currently working on. I have an app that imports product information from one system to another system. The user of the app gets to choose various settings for which product fields from one system they want to use in the other system. So I have a class, say ProductImporter, and its responsibility is to import products. This class is large, probably too large. The methods in this class are complex; one example would be getDescription. This method doesn't simply grab a description from the other system but sets a product description based on various settings set by the user. If I were to add a setting and a new way to get a description, this class could change. So, is that two responsibilities? Is there one that imports products and one that gets a description? It would seem this way, but then almost every method I have would be in its own class, and that seems like overkill. I really need a good description of this principle because it's hard for me to completely understand. I don't want needless complexity.
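
    One way to read SRP for the case above, sketched in C# with entirely made-up type names: keep ProductImporter responsible only for orchestrating the import, and move the setting-driven description rules behind their own small abstraction, so adding a new description rule changes the description builder rather than the importer.

        public class SourceProduct { public string ShortDescription; public string LongDescription; }
        public class TargetProduct { public string Description; }
        public class ImportSettings { public bool UseLongDescription; }

        public interface IDescriptionBuilder
        {
            string BuildDescription(SourceProduct source, ImportSettings settings);
        }

        public class SettingsDrivenDescriptionBuilder : IDescriptionBuilder
        {
            public string BuildDescription(SourceProduct source, ImportSettings settings)
            {
                // All the "which fields, in which order, with which fallbacks" rules live here.
                return settings.UseLongDescription && !string.IsNullOrEmpty(source.LongDescription)
                    ? source.LongDescription
                    : source.ShortDescription;
            }
        }

        public class ProductImporter
        {
            private readonly IDescriptionBuilder _descriptions;

            public ProductImporter(IDescriptionBuilder descriptions)
            {
                _descriptions = descriptions;
            }

            public TargetProduct Import(SourceProduct source, ImportSettings settings)
            {
                // The importer only coordinates the mapping; it has no description logic of its own.
                return new TargetProduct
                {
                    Description = _descriptions.BuildDescription(source, settings)
                };
            }
        }

    A new setting then tends to mean a change inside (or a new implementation of) the builder, which is one practical reading of "one reason to change" that stops well short of putting every method in its own class.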


  • How to use jQuery .live() with ajax

    - by kylemac
    Currently I am using John Resig's LiveQuery plugin/function - http://ejohn.org/blog/jquery-livesearch/ - to allow users to sort through a long unordered list of list items. The code is as follows:

        $('input#q').liveUpdate('ul#teams').focus();

    The issue arises when I use ajaxified tabs to sort the lists. Essentially I use ajax to pull in different lists, and the liveUpdate() function doesn't have access to the new li's. I assume I would need to bind this using the .live() function - http://api.jquery.com/live/. But I am unclear how to bind this to an ajax event; I've only used the "click" event. How would I bind the new liveUpdate() to the newly loaded list items? EDIT: The ajax tabs are run through the WordPress ajax API, so the code is fairly complex, but simplified it is something like this:

        $('div.item-list-tabs').click( function(event) {
            var target = $(event.target).parent();
            var data = {action, scope, pagination}; // Passes action to WP that loads my tab data
            $.post( ajaxurl, data, function(response) {
                $(target).fadeOut( 100, function() {
                    $(this).html(response);
                    $(this).fadeIn(100);
                });
            });
            return false;
        });

    This is simplified for the sake of this conversation, but basically once the $.post loads the response in place, .liveUpdate() doesn't have access to it. I believe the .live() function is the answer to this problem, I'm just unclear on how to implement it with the $.post().


  • Algorithm to determine which points should be visible on a map based on zoom

    - by lgratian
    Hi! I'm making a Google Maps-like application for a course at my Uni (nothing complex; it should load the map of a city, for example, not the whole world). The map can have many layers, including markers (restaurants, hospitals, etc.). The problem is that when you have many points and you zoom out the map, it doesn't look right. At this zoom level only some points need to be visible (and at the maximum map size, all points). The question is: how can you determine which points should be visible for a specified zoom level? Because I have implemented a PR quadtree to speed up rendering, I thought that I could define some "high-priority" markers (always visible, defined in the map editor) and put them in a queue. At each step a marker is removed from the queue, and all its neighbors that are at least D units away (D depends on the zoom level) are chosen and inserted in the queue, and so on. Is there any better way than the algorithm I thought of? Thanks in advance!
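
    A minimal C# sketch of the distance-thinning idea described above, written naively (an O(n²) scan instead of the PR quadtree, purely to show the selection rule, and with an invented 256/2^zoom threshold formula): markers are visited in priority order and kept only if no already-kept marker lies within D, with D shrinking as the zoom level grows so more markers survive at closer zooms.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Marker
        {
            public double X, Y;
            public int Priority;    // higher = more important; "high-priority" markers win ties
        }

        static class MarkerThinning
        {
            // Returns the markers that should be visible at the given zoom level.
            public static List<Marker> Visible(IEnumerable<Marker> markers, int zoom)
            {
                // Example tuning rule only: the minimum spacing halves with each zoom level.
                double d = 256.0 / Math.Pow(2, zoom);
                var kept = new List<Marker>();

                foreach (var m in markers.OrderByDescending(x => x.Priority))
                {
                    bool crowded = kept.Any(k =>
                        (k.X - m.X) * (k.X - m.X) + (k.Y - m.Y) * (k.Y - m.Y) < d * d);
                    if (!crowded)
                        kept.Add(m);
                }
                return kept;
            }
        }

    The quadtree (or a uniform grid keyed by cells of size D) replaces the kept.Any scan with a lookup of nearby cells, which is what keeps this fast for large marker sets; the rule itself stays the same.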


  • How Does MVC Handle Missing Data Requirements

    - by Don Bakke
    I'm teaching myself MVC concepts in hopes of applying them to a non-OO/procedural development environment. I am pretty sure I understand simple View - Request - Controller - Request - Model - Response - Controller - Response - View flow. What I am struggling with is understanding more complex scenarios. For instance, let's say I have a shopping cart form with a button for 'Calculate Shipping'. Normally a click on this button will follow the above flow. But what if there is missing data, like the zip code? Should the View verify this first and alert the user before making a 'Calculate Shipping' request? Or should the request be made and the Model returns a notification that critical data is missing? If the latter, does the Controller instruct the View to alert the user? What if I wanted to prompt the user for the missing zip code (perhaps in a popup input display) and then automatically request the 'Calculate Shipping' method again? I suppose this gets into the question of how smart a View ought to be. It seems that MVC has evolved due to richer UI and automation (such as with data-binding) and this muddies the water from a purist MVC perspective. Any thoughts are greatly appreciated.
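
    One common split, sketched here in plain C# without any particular MVC framework (every name below is invented): the view forwards the raw input, the model owns the rule that a zip code is required, and the controller decides whether to show the result or ask the view to prompt for the missing value. A lightweight view-side check can still exist as a usability shortcut, but the model/controller side stays authoritative.

        public class ShippingRequest
        {
            public string ZipCode;
            public decimal CartWeight;
        }

        public interface IShippingView
        {
            void ShowShippingCost(decimal cost);
            void PromptForMissingData(string message);   // e.g. a popup asking for the zip code
        }

        public class ShippingModel
        {
            // The model owns the business rule: no zip code, no shipping calculation.
            public bool TryCalculate(ShippingRequest request, out decimal cost, out string error)
            {
                if (string.IsNullOrEmpty(request.ZipCode))
                {
                    cost = 0;
                    error = "Zip code is required to calculate shipping.";
                    return false;
                }
                cost = 5.00m + request.CartWeight * 0.10m;   // placeholder rate
                error = null;
                return true;
            }
        }

        public class ShippingController
        {
            private readonly ShippingModel _model = new ShippingModel();

            // Called by the view when the user clicks "Calculate Shipping".
            public void OnCalculateShipping(ShippingRequest request, IShippingView view)
            {
                if (_model.TryCalculate(request, out decimal cost, out string error))
                    view.ShowShippingCost(cost);
                else
                    view.PromptForMissingData(error);
            }
        }

    Re-running the calculation after the user supplies the zip code is then just the view calling OnCalculateShipping again with the completed request, which keeps the "how smart is the view" question contained to presentation concerns.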


  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a Subversion repository, but I am using Git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the Subversion repo quite glaring, and that creates problems for me. There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts at some point to the Subversion repository). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in - I simply don't stage or commit them - but having unstaged local changes means I can't rebase without first cleaning them up. What I would like to know is whether there is any way to ignore future changes to a set of tracked files. Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?


  • C# JSON serialization

    - by Bridget the Midget
    I'm trying out the HighStock library for creating stock charts. To fill the chart with data, their example specifies this source. The first parameter is unix time in milliseconds and the second parameter is the stock closing price. I don't know if this is valid JSON, but I would argue that the following would be a more appropriate way of writing it:

        [{"Closing":63.15000,"Date":1262559600000},{"Closing":64.75000,"Date":1262646000000}, ...

    I guess that I have no other option than to adapt to HighStock's syntax. I could solve this by looping and adding the correct syntax to a string, but that seems rudimentary. Would it be wiser to serialize C# objects to create my JSON, and if that's the case - how can I reach the syntax specified in the example? Let's just say this is my C# object:

        public class Quote
        {
            public double Date { get; set; }
            public decimal Closing { get; set; }
        }

    Am I making it unnecessarily complex? Should I just format a JSON string?
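
    Serializing objects is usually cleaner than string-building; the trick for the format above is to serialize an array of two-element arrays rather than an array of Quote objects. A rough sketch using the built-in JavaScriptSerializer (Json.NET would work the same way); the Quote class is repeated so the snippet stands alone, and the sample values are the two points from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Script.Serialization;   // System.Web.Extensions assembly

        public class Quote
        {
            public double Date { get; set; }      // unix time in milliseconds
            public decimal Closing { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var quotes = new List<Quote>
                {
                    new Quote { Date = 1262559600000, Closing = 63.15m },
                    new Quote { Date = 1262646000000, Closing = 64.75m }
                };

                // Project each quote into a [timestamp, closing] pair before serializing.
                object[][] pairs = quotes
                    .Select(q => new object[] { q.Date, q.Closing })
                    .ToArray();

                string json = new JavaScriptSerializer().Serialize(pairs);
                Console.WriteLine(json);   // [[1262559600000,63.15],[1262646000000,64.75]]
            }
        }

    The object model stays shaped the way the C# code wants it; only the projection step knows about the chart library's wire format.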


  • Setting EditText imeOptions to actionNext has no effect

    - by Katedral Pillon
    I have a fairly complex (not really) XML layout file. One of the views is a LinearLayout (v1) with two children: an EditText (v2) and another LinearLayout (v3). The child LinearLayout in turn has an EditText (v4) and an ImageView (v5). For EditText v2 I have imeOptions set as android:imeOptions="actionNext". But when I run the app, the keyboard's return key does not change to Next, and I want it to change to Next. How do I fix this problem? Also, when the user clicks Next, I want focus to go to EditText v4. How do I do this? For those who really need to see some code:

        <LinearLayout
            android:id="@+id/do_txt_view"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:background="@color/col6"
            android:orientation="vertical"
            android:visibility="gone" >

            <EditText
                android:id="@+id/gm_title"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:layout_margin="5dp"
                android:background="@drawable/coldo_text"
                android:hint="@string/enter_title"
                android:maxLines="1"
                android:imeOptions="actionNext"
                android:padding="5dp"
                android:textColor="pigc7"
                android:textSize="ads2" />

            <LinearLayout
                android:layout_width="match_parent"
                android:layout_height="100dp"
                android:orientation="horizontal" >

                <EditText
                    android:id="@+id/rev_text"
                    android:layout_width="0dp"
                    android:layout_height="match_parent"
                    android:layout_gravity="center_vertical"
                    android:layout_margin="5dp"
                    android:layout_weight="1"
                    android:background="@drawable/coldo_text"
                    android:hint="@string/enter_msg"
                    android:maxLines="2"
                    android:padding="5dp"
                    android:textColor="pigc7"
                    android:textSize="ads2" />

                <ImageView
                    android:layout_width="wrap_content"
                    android:layout_height="match_parent"
                    android:layout_gravity="center_vertical"
                    android:background="@drawable/colbtn_r"
                    android:clickable="true"
                    android:onClick="clickAct"
                    android:paddingLeft="5dp"
                    android:paddingRight="5dp"
                    android:src="@drawable/abcat" />
            </LinearLayout>
        </LinearLayout>


  • How much business logic belongs in RIA services layer?

    - by jkohlhepp
    I have been experimenting recently with Silverlight, RIA Services, and Entity Framework using .NET 4.0. I'm trying to figure out if that stack makes sense for use in any of my upcoming projects. It certainly seems like these technologies can be very productive for developing applications, but I'm struggling to decide how an application on top of this stack should be architected. The main issue I have is that in most of the demos I've seen most of the business logic ends up as DataAnnotations and custom validations in the RIA Services domain service class. This seems inappropriate to me. I view the domain service as basically a glorified web service that happens to make it easy to push information to the client. But most of what I've seen seems to orient the domain service as the main source of business logic in the application. So, my questions: What is the best location for business logic (rules, validations, behaviors, authorization) in an application using this stack? Are there any guidelines published at an architectural level for using this stack? My questions pertain to large, complex, and long-lived applications. Obviously for an application of only a few screens this is less of a concern. Edit: Another thing I meant to mention is that obviously you can make the domain service class stupid, but then you lose a lot of the automagic entity information (e.g. validations) being pushed to the client. And then if you lose that is there any point to using RIA services?
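
    One arrangement people use to keep the domain service thin, sketched very loosely in C# (the RIA-specific base class, attributes and hosting details are deliberately omitted, and every name below is invented): the entity carries the simple declarative rules as DataAnnotations, since those are the pieces the tooling can project to the client, while behavioral rules live in an ordinary business class that the domain service delegates to.

        using System;
        using System.ComponentModel.DataAnnotations;

        public class Order
        {
            [Required]              // simple rules stay as metadata on the entity,
            [StringLength(10)]      // so client-side validation can be generated from them
            public string CustomerCode { get; set; }

            public decimal Total { get; set; }
        }

        public class OrderRules
        {
            // Richer, server-only behavior lives here, not in the domain service.
            public void ApproveOrder(Order order, string user)
            {
                if (order.Total > 10000m && !IsManager(user))
                    throw new InvalidOperationException("Orders over 10,000 need manager approval.");
                // ... persist, audit, notify, etc.
            }

            private bool IsManager(string user) { return false; /* placeholder */ }
        }

        public class OrderDomainService /* : DomainService in the real stack */
        {
            private readonly OrderRules _rules = new OrderRules();

            public void ApproveOrder(Order order)
            {
                // Thin pass-through: behavior and authorization delegated to the business layer.
                _rules.ApproveOrder(order, CurrentUser());
            }

            private string CurrentUser() { return "unknown"; /* placeholder */ }
        }

    The trade-off the question raises is real: only what is expressed declaratively on the entities gets the "automagic" client-side treatment, so the split is usually declarative rules on entities, behavioral rules behind the service.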


  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and the values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

        1. Request received from client
        2. Business layer invokes data layer to retrieve data from database
        3. Business layer processes result and determines which operation to execute
        4. Business layer invokes appropriate data operation
        5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure if I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR SP. My understanding is that the SP will be executed locally by SQL Server and result in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code, which I have not worked out yet, and/or some flaw in my environment. My question is basically: is this the intended use of CLR SPs, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
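
    For reference, a bare-bones CLR procedure looks roughly like the sketch below (table and column names are invented, and steps 3-4 are reduced to a trivial branch). It runs inside SQL Server on the in-process "context connection", so the extra round trip between the web server and the database disappears; the managed code itself is not inherently faster than T-SQL, though, which is why a plain T-SQL procedure wrapping the same decision is usually the first thing to benchmark against.

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class Procedures
        {
            [SqlProcedure]
            public static void ProcessRequest(int requestId)
            {
                // The context connection is the session already executing this procedure,
                // so there is no additional network hop back to the database.
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();

                    // Step 2: retrieve the data needed for the decision.
                    object status;
                    using (SqlCommand read = new SqlCommand(
                        "SELECT Status FROM Requests WHERE Id = @id", conn))
                    {
                        read.Parameters.AddWithValue("@id", requestId);
                        status = read.ExecuteScalar();
                    }

                    // Steps 3-4: decide and perform the appropriate write, still in-process.
                    string sql = (status != null && (int)status == 1)
                        ? "UPDATE Requests SET Handled = 1 WHERE Id = @id"
                        : "INSERT INTO RequestAudit (RequestId) VALUES (@id)";
                    using (SqlCommand write = new SqlCommand(sql, conn))
                    {
                        write.Parameters.AddWithValue("@id", requestId);
                        write.ExecuteNonQuery();
                    }
                }
            }
        }

    Either way the business layer shrinks to a single call to the procedure, which is the round-trip saving the question is after.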


  • Can someone look over the curriculum for this major & give me your thoughts? Computing & Security Technology

    - by scottsharpejr
    My goal is to become a good web developer. I'm interested in learning how to build complex websites as well as how to write web applications. I want skills that will enable me to write apps for <--insert hottest web trend here-- (Facebook & iPhone apps, for example). This is one of my goals as far as tech is concerned. I'd also like to have a broad knowledge of different areas of IT. I'm looking into majoring in "Computing & Security Technology". The program is offered by Drexel in conjunction with my CC. It's a 4-year degree. Can someone take a look at the PDF below? It outlines every course I must take. http://www.drexelatbcc.org/academics/PDF/CST_CT.pdf For degree requirements with links to course descriptions, see drexel.edu/catalog/degree/ct.htm. With electives I can go up to Web Development 4. Based on my goals of web development and wanting a well-rounded education in information technology, what do you think of the curriculum? How will I fare entering the job market with this degree? My goals here are a little different. I'd like to work for 2 to 3 companies over the course of 6-7 years, working with and learning different areas of IT. I'd like to stay with a company an average of 2-3 years before moving on. My end goal is to go into business for myself (IT related). I appreciate any and all advice the community here can give me! :) Could someone also explain to me their interpretation of this major? Thanks! P.S. I already know XHTML & CSS. I am just now starting to experiment with PHP.


  • Rapid application development tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact, there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that would allow me to define a database data model (make DB tables) and then create (view and edit) forms for the various tables. Data input, updating and various queries should be easy, and the GUI should be generated automatically. I'd like to add some additional functionality by coding (such as various complex calculations on the data). I'm a programmer, so I'm willing to learn to use a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently doing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know that there are solutions out there that are much better, and I'd benefit from knowing how to use them. So in short, the basic requirements:

        * database-based data storage (SQLite if possible)
        * very automated GUI creation
        * desktop based (as in: not a web app)
        * extendable by coding
        * used for creating simple data entry, view & query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.


  • What does a Java web project architecture look like without EJB3?

    - by Hendrik
    A friend and I are building a fairly complex website based on Java. (PHP would have been more obvious, but we chose Java because the educational aspect of this project is important to us.) We have already decided to use JSF (with RichFaces) for the front end and JPA for the back end, and so far we have decided not to use EJB3 for the business layer. The reason we've decided not to use EJB3 is because - and please correct me if I am wrong - if we use EJB3 we can only run it on a full-blown Java application server like JBoss, and if we don't use EJB3 we can still run it on a lightweight server like Tomcat. We want to keep the speed and cost of our future web server in mind. So far I've worked on two JEE projects, and both used the full stack (web, business logic, factories, persistence, service, entities) with every layer a separate module. Now here is my question: if you don't use EJB3 in the business logic layer, what does the layer look like? What is common practice when developing Java web projects without EJB3? Do you think the business logic layer can be thrown out altogether, with business logic in the backing beans? If you keep the layer, do you have all business methods static? Or do you initialize each business class as needed in the backing beans, per session?


  • .NET regex: Match.NextMatch() never returns

    - by Jimmy
    I have a regex that seems to have worked fine for the past year or so, and all of a sudden today, with a new, slightly different text to match against, Match.NextMatch() never returns. I'm no regex expert and I'm sure the regex can be optimized, but previous data sets weren't much more complex than what I've tried today. Furthermore, the regex works fine against the offending data set in a tool like RegexBuddy; it's only in .NET (running in debug in Visual Studio) that it seems to hang. Nevertheless, if anyone can figure out how to tweak the regex to make it work, I'd really appreciate it. This is the regex:

        <tr>(<td[^>]*><a[^>]*>(?<callOptionTicker>[A-Z]{1,5}\d{6}C\d{8})</a></td>)(<td[^>]*>.*?</td>){6}(<td[^>]*><b><a[^>]*>(?<strikePrice>\d*\.\d*)</a></b></td>)(<td[^>]*><a[^>]*>(?<putOptionTicker>[A-Z]{1,5}\d{6}P\d{8})</a></td>)

    It's meant to extract put and call option tickers from a Yahoo option chain page (i.e., the raw HTML). It works fine for IBM: http://finance.yahoo.com/q/os?s=IBM&m=2010-05-21 It doesn't work for SPX options (this is the offending data set): http://finance.yahoo.com/q/os?s=I:SPX.W&m=2010-05
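
    Never returning, rather than failing, usually points at catastrophic backtracking: once the overall match starts failing on the SPX page, the repeated (<td[^>]*>.*?</td>){6} group has exponentially many ways to re-split the skipped cells. Two defensive tweaks that are commonly suggested, sketched below as a guess rather than a verified fix: replace .*? with a "tempered dot" that cannot run past its own </td>, and (on .NET 4.5 or later) give the Regex a match timeout so a pathological input throws instead of hanging. The html placeholder and the printing are illustrative only.

        using System;
        using System.Text.RegularExpressions;

        class OptionChainScrape
        {
            // Same structure as the original pattern; only the six skipped cells change:
            // (?:(?!</td>).)* lets a cell contain markup but never cross its closing </td>,
            // which removes most of the re-splitting ambiguity.
            const string Pattern =
                @"<tr>(<td[^>]*><a[^>]*>(?<callOptionTicker>[A-Z]{1,5}\d{6}C\d{8})</a></td>)" +
                @"(<td[^>]*>(?:(?!</td>).)*</td>){6}" +
                @"(<td[^>]*><b><a[^>]*>(?<strikePrice>\d*\.\d*)</a></b></td>)" +
                @"(<td[^>]*><a[^>]*>(?<putOptionTicker>[A-Z]{1,5}\d{6}P\d{8})</a></td>)";

            static void Main()
            {
                string html = "...";   // page source for the option chain

                // .NET 4.5+: a timeout turns a runaway match into RegexMatchTimeoutException.
                var regex = new Regex(Pattern, RegexOptions.None, TimeSpan.FromSeconds(5));

                for (Match m = regex.Match(html); m.Success; m = m.NextMatch())
                {
                    Console.WriteLine("{0} / {1} @ {2}",
                        m.Groups["callOptionTicker"].Value,
                        m.Groups["putOptionTicker"].Value,
                        m.Groups["strikePrice"].Value);
                }
            }
        }

    Whether the tempered pattern still matches the SPX rows depends on the actual markup in those cells, so it needs checking against the real page; the timeout, at least, guarantees the call returns.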


  • Counter that will remember its value

    - by owca
    I have a task to operate on complex numbers. Each number consists of double r = real part, double i = imaginary part, and String name. Name must be set within the constructor, so I've created an int counter, then I'm sending its value to the setNextName function and getting a name letter back. Unfortunately, incrementing this 'counter' value works only within the constructor, and then it is once again set to 0. How to deal with that? Some constant value? And the second problem is that I also need to provide a setNextName(char c) function that will change the counter's current value. The code:

        public class Imaginary {
            private double re;
            private double im;
            private String real;
            private String imaginary;
            private String name;
            private int counter = 0;

            public Imaginary(double r, double u) {
                re = r;
                im = u;
                name = this.setNextName(counter);
                counter++;
            }

            public static String setNextName(int c) {
                String nameTab[] = {"A","B","C","D","E","F","G","H","I","J","K","L","M","N",
                                    "O","P","Q","R","S","T","U","W","V","X","Y","Z"};
                String setName = nameTab[c];
                System.out.println("c: " + c);
                return setName;
            }

            public static String setNextName(char c) {
                //
                // don't know how to deal with this part
                //
            }
        }


  • When virtual inheritance IS a good design?

    - by 7vies
    EDIT3: Please be sure to clearly understand what I am asking before answering (there are EDIT2 and lots of comments around). There are (or were) many answers which clearly show misunderstanding of the question (I know that's also my fault, sorry for that) Hi, I've looked over the questions on virtual inheritance (class B: public virtual A {...}) in C++, but did not find an answer to my question. I know that there are some issues with virtual inheritance, but what I'd like to know is in which cases virtual inheritance would be considered a good design. I saw people mentioning interfaces like IUnknown or ISerializable, and also that iostream design is based on virtual inheritance. Would those be good examples of a good use of virtual inheritance, is that just because there is no better alternative, or because virtual inheritance is the proper design in this case? Thanks. EDIT: To clarify, I'm asking about real-life examples, please don't give abstract ones. I know what virtual inheritance is and which inheritance pattern requires it, what I want to know is when it is the good way to do things and not just a consequence of complex inheritance. EDIT2: In other words, I want to know when the diamond hierarchy (which is the reason for virtual inheritance) is a good design


  • How to arrange business logic in a Kohana 3 project

    - by Pekka
    I'm looking for advice, tutorials and links at how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past but never worked against a "formalized" MVC framework so I'm still getting my head around the terminology - toying around with basic examples, building views and templates, and so on. I'm progressing fairly well but I want to set up a real-world web project (one of my own that I've been planning for quite some time now) as a learning object. I learn best by example, but example-based documentation is a bit sparse for Kohana 3 right now - they say so themselves on the site. While I'm not worried about learning the framework as I go along, I want to make sure the code base is healthily structured from the start - i.e. controllers are split nicely, named well and according to standards, and most importantly the business logic is separated into appropriately sized models. My application could, in its core, be described as a business directory with a range of search and listing functions, and a login area for each entry owner. The actual administrative database backend is already taken care of. Supposing I have all the API worked out and in place already - list all businesses, edit business, list businesses by street name, create offer logged in as business, and so on, and I'm just looking for how to fit the functionality into a MVC pattern and into a Kohana application structure that can be easily extended. Do you know real-life examples of "database-heavy" applications like directories, online communities... with a log-in area built on Kohana 3, preferably Open Source so I could take a peek how they do it? Are there conventions or best practices on how to structure an extendable login area for end users in a Kohana project that is not only able to handle a business directory page, but further products on separate pages as well? Do you know any good resources on building complex applications with Kohana? Have you built something similar and could give me recommendations on a project structure?


  • Creating and publishing an Excel file in MOSS 2007 using data from SQL Server

    - by Diomos
    Hello, I need help with this matter: We have a template Excel file in which all calculations are already set up. A user can request a 'report'. The idea is to create a button on our site (a SharePoint portal). After clicking on it, a new Excel file is generated. This means getting the actual data from the database (SQL Server 2005 SP2), importing it into the template, letting all the calculations generate the proper data, and then allowing the user to see this file. For now it's enough to publish the final Excel file in a document library. I am quite new to WSS 3.0 and MOSS 2007, and I need some advice on what the best solution might be. It looks like quite a complex task to me. Is there some direct way to accomplish this? Or maybe I need one tool to get data from the database and import it into the Excel file (SSRS?) and another tool to publish it in the document library (MOSS 2007 Excel Services?). I heard something about PerformancePoint Server 2007; is this a way to follow? Thanks in advance for any advice!


  • Function that prints something to std::ostream and returns std::ostream?

    - by dehmann
    I want to write a function that outputs something to an ostream that's passed in, and returns the stream, like this:

        #include <iostream>

        std::ostream& MyPrint(int val, std::ostream* out) {
          *out << val;
          return *out;
        }

        int main(int argc, char** argv) {
          std::cout << "Value: " << MyPrint(12, &std::cout) << std::endl;
          return 0;
        }

    It would be convenient to print the value like this and embed the function call in the output operator chain, like I did in main(). It doesn't work, however, and prints this:

        $ ./a.out
        12Value: 0x6013a8

    The desired output would be this:

        Value: 12

    How can I fix this? Do I have to define an operator<< instead? UPDATE: Clarified what the desired output would be. UPDATE2: Some people didn't understand why I would print a number like that, using a function instead of printing it directly. This is a simplified example, and in reality the function prints a complex object rather than an int.


  • Is there any way to combine transparency and "ajax usability" with HTML templates

    - by Sam
    I'm using HTML_Template_Flexy in PHP but the question should apply to any language or template library. I am outputting a list of relatively complex objects. In the beginning I just iterated over a list of the objects and called a toHtml method on them. When I was about to have my layout designer look over the template I noticed that it was too opaque and that he would have ended up looking through and/or editing many additional php source files to see what really gets generated by the toHtml method. So I extracted most of the HTML strings in the php classes up to the template which made for one clear file where you can see the whole page structure at once. However this causes problems when you want to add an object to the list using javascript. Then I have to keep the old toHtml method and maintain both the main template and the html strings at the same time, so I can output just the HTML for a new object that should be added to the page. So I'm back to the idea of using smaller templates for the objects that make up the page, but I was wondering if there was some easy way of having my cake and eating it too by having one template that shows the whole page but also the mini-templates for smaller objects on the page. Edit: Yes, updating the page is not a problem at all. My concern is with having both maintainability and transparency of the template files. If I have one single template for the whole page, then I must maintain mini-templates of the objects that are shown on the page. If I just have the mini-templates and include them from the higher-level template it becomes more difficult to look at the top-level html and imagine what the end result will look like.


  • Simple in-place discrete fourier transform ( DFT )

    - by Adam
    I'm writing a very simple in-place DFT. I am using the formula shown here: http://en.wikipedia.org/wiki/Discrete_Fourier_transform#Definition along with Euler's formula, to avoid having to use a complex number class just for this. So far I have this:

        private void fft(double[] data)
        {
            double[] real = new double[256];
            double[] imag = new double[256];
            double pi_div_128 = -1 * Math.PI / 128;

            for (int k = 0; k < 256; k++)
            {
                for (int n = 0; n < 256; n++)
                {
                    real[k] += data[k] * Math.Cos(pi_div_128 * k * n);
                    imag[k] += data[k] * Math.Sin(pi_div_128 * k * n);
                }
                data[k] = Math.Sqrt(real[k] * real[k] + imag[k] * imag[k]);
            }
        }

    But the Math.Cos and Math.Sin terms eventually go both positive and negative, so as I'm adding those terms multiplied with data[k], they cancel out and I just get some obscenely small value. I see how it is happening, but I can't make sense of how my code is perhaps misrepresenting the mathematics. Any help is appreciated. FYI, I do have to write my own; I realize I can get off-the-shelf FFTs.
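
    For comparison, here is a sketch of the linked definition X[k] = sum over n of x[n]·e^(-2πikn/N) written the same way. The key difference from the code above is that the inner loop multiplies data[t] (the time-domain sample being summed over) rather than data[k]; with data[k] constant inside the inner sum, the cosine and sine terms really do sum to roughly zero for every k except 0, which matches the vanishing values described. The second change, writing the magnitudes back in a separate pass, is there because an in-place overwrite of data[k] would otherwise corrupt the input still needed by later iterations.

        // Assumes data.Length == 256 and overwrites data[] with magnitudes, like the original.
        private void Dft(double[] data)
        {
            int n = data.Length;                       // 256
            double[] real = new double[n];
            double[] imag = new double[n];
            double w = -2.0 * Math.PI / n;             // equals -PI/128 when n == 256

            for (int k = 0; k < n; k++)
            {
                for (int t = 0; t < n; t++)            // t = time-domain sample index
                {
                    real[k] += data[t] * Math.Cos(w * k * t);
                    imag[k] += data[t] * Math.Sin(w * k * t);
                }
            }

            // Second pass so the input is not modified while it is still being read.
            for (int k = 0; k < n; k++)
                data[k] = Math.Sqrt(real[k] * real[k] + imag[k] * imag[k]);
        }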


  • Tips on managing dependencies for a release?

    - by Andrew Murray
    Our system comprises many .NET websites, class libraries, and an MSSQL database. We use SVN for source control and TeamCity to automatically build to a Test server. Our team is normally working on 4 or 5 projects at a time. We try to lump many changes into a largish rollout every 2-4 weeks. My problem is with keeping track of all the dependencies for a rollout. Example: Website A cannot go live until we've rolled out Branch X of Class Library B, built in turn against the trunk of Class Library C, which needs Config Updates Y and Z and Database Update D, which needs Migration Script E... It gets even more complex - like making sure each developer's project is actually compatible with the others and they are building against the same versions. Yes, this is a management issue as much as a technical issue. Currently our non-optimal solution is:

        * a whiteboard listing features that haven't gone live yet
        * relying on our memory and intuition when planning the rollout, until we're pretty sure we've thought of everything...
        * a dry-run on our Staging environment - a good indication, but we're often not sure if Staging is 100% in sync with Live, which is part of the problem I'm hoping to solve
        * some amount of winging it on rollout day

    So far so good, minus a few close calls. But as our system grows, I'd like a more scientific release management system allowing for more flexibility, like being able to roll out a single change or bugfix on its own, safe in the knowledge that it won't break anything else. I'm guessing the best solution involves some sort of version numbering system, and perhaps using a project management tool. We're a start-up, so we're not too hot on religiously sticking to rigid processes, but we're happy to start, provided it doesn't add more overhead than it's worth. I'd love to hear advice from other teams who have solved this problem.


  • Is it immoral to write crappy code even if readability and correctness are not requirements?

    - by mafutrct
    There are cases when crappy (i.e. unreadable and buggy) code is not much of a problem. For instance, imagine you need to generate a big text file that mostly follows a simple pattern with a few very complex exceptions. What do you do? You quickly write a simple algorithm and insert the exceptional bits in the output manually to save 4 hours. The code is unreadable, and the output is flawed, but it's still the correct way since it is way faster. But let's get this straight: I hate bad code. I've had to read and work with code that caused my stomach to hurt. I care a lot about good code. And actually, I caught myself thinking that it is immoral to write bad code even though the dirty approach is sometimes superior. I was surprised by myself and found my idea to be very irrational. Did you ever experience this? Should I just get rid of this stupid idea and use the most efficient approach to coding?

