Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

Page 361/605

  • Basic use of Business Rules

    - by shinynewbike
    I have a query on whether the following requirements would need to be designed via Business Rules - this is for a JEE based application where this is currently coded as part of the business logic. The system will create a tax account for every city, county and district combination that imposes tax, for only certain cities, counties or districts depending on the taxpayer's business. When the user establishes an account which exists in all subdivisions (i.e. at city or county level), the application must use his tax code and automatically populate all the locations without requiring the user to enter data for every location. I assume this would mean a lookup against a master table (of tax accounts) to fetch and display all locations. Is there some way in which a Rules Engine can be used to manage these combinations?
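
    For what it's worth, the lookup-and-populate step described above can be sketched in plain Java before deciding whether a rules engine is warranted. This is only a minimal sketch; the TaxJurisdiction, TaxAccount and TaxAccountService names are illustrative assumptions, not types from any real framework.

        import java.util.ArrayList;
        import java.util.List;

        // Illustrative types only; names and fields are assumptions.
        class TaxJurisdiction {
            final String city, county, district;
            TaxJurisdiction(String city, String county, String district) {
                this.city = city; this.county = county; this.district = district;
            }
        }

        class TaxAccount {
            final String taxCode;
            final TaxJurisdiction location;
            TaxAccount(String taxCode, TaxJurisdiction location) {
                this.taxCode = taxCode; this.location = location;
            }
        }

        class TaxAccountService {
            // Create one account per taxable jurisdiction using the taxpayer's
            // tax code, so the user never has to enter each location by hand.
            List<TaxAccount> createAccounts(String taxCode, List<TaxJurisdiction> taxableLocations) {
                List<TaxAccount> accounts = new ArrayList<TaxAccount>();
                for (TaxJurisdiction location : taxableLocations) {
                    accounts.add(new TaxAccount(taxCode, location));
                }
                return accounts;
            }
        }

    A rules engine would mainly earn its keep if the conditions deciding which jurisdictions are taxable change often; the mechanical populate step stays this simple either way.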

    Read the article

  • How to programmatically construct textual query

    - by stibi
    Here is a query language, JQL, which you can use in Jira to search for issues; it's something like SQL, but much simpler. My case is that I need to construct such queries programmatically in my application. Something like:

        JQLMachine jqlMachine = new JQLMachine();
        jqlMachine.setStatuses("Open", "In Progress");
        jqlMachine.setReporter("foouser", "baruser");
        jqlMachine.setDateRange(...);
        jqlMachine.getQuery();  // --> String with the corresponding JQL query is returned

    You get my point, I hope. I can imagine the code for this, but with my current knowledge it wouldn't come out nicely, which is why I'm asking: what would you advise using to create such a thing? I believe patterns for creating something like this already exist, along with best practices for doing it in a good way.
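
    A minimal sketch of one common approach is a fluent builder that accumulates clauses and joins them at the end. The JQLMachine above is the asker's hypothetical API; the builder below, its method names and its simplified quoting are likewise illustrative assumptions rather than a complete JQL implementation.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical fluent JQL builder; clause syntax and escaping are simplified.
        class JqlBuilder {
            private final List<String> clauses = new ArrayList<String>();

            JqlBuilder statuses(String... values)  { return in("status", values); }
            JqlBuilder reporters(String... values) { return in("reporter", values); }

            JqlBuilder createdBetween(String from, String to) {
                clauses.add("created >= \"" + from + "\" AND created <= \"" + to + "\"");
                return this;
            }

            private JqlBuilder in(String field, String... values) {
                StringBuilder joined = new StringBuilder();
                for (String value : values) {
                    if (joined.length() > 0) joined.append(", ");
                    joined.append('"').append(value).append('"');
                }
                clauses.add(field + " in (" + joined + ")");
                return this;
            }

            String build() {
                StringBuilder jql = new StringBuilder();
                for (String clause : clauses) {
                    if (jql.length() > 0) jql.append(" AND ");
                    jql.append(clause);
                }
                return jql.toString();
            }
        }

        // Usage (illustrative):
        //   String jql = new JqlBuilder()
        //           .statuses("Open", "In Progress")
        //           .reporters("foouser", "baruser")
        //           .createdBetween("2012-01-01", "2012-06-30")
        //           .build();
        //   // status in ("Open", "In Progress") AND reporter in ("foouser", "baruser") AND ...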

    Read the article

  • Is individual code ownership important?

    - by Jim Puls
    I'm in the midst of an argument with some coworkers over whether team ownership of the entire codebase is better than individual ownership of components of it. I'm a huge proponent of assigning every member of the team a roughly equal share of the codebase. It lets people take pride in their creation, gives the bug screeners an obvious first place to assign incoming tickets, and helps to alleviate "broken window syndrome". It also concentrates knowledge of specific functionality with one (or two) team members, making bug fixes much easier. Most of all, it puts the final say on major decisions with one person who has a lot of input instead of with a committee. I'm not advocating for requiring permission if somebody else wants to change your code; maybe have code reviews always go through the owner, sure. Nor am I suggesting building knowledge silos: there should be nothing exclusive about this ownership. But when suggesting this to my coworkers, I got a ton of pushback, certainly much more than I expected. So I ask the community: what are your opinions on working with a team on a large codebase? Is there something I'm missing about vigilantly maintaining collective ownership?

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, the categories those products belong to (a product can be in one or more categories, referred to as "Product Categories"), and which stores the products are available at. These tables are updated once a week from a feed from the customer. Since some of the product categories are the same or closely related for our purposes, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers:

        Data tables:    Products: 475,000; Manufacturers: 1300; Stores: 150;
                        General Categories: 245; Product Categories: 500
        Mapping tables: Product Category -> Product: 655,000; Stores -> Products: 50,000,000

    Now, for the actual problem: As part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them by manufacturer and including each row's number within its partition, then selecting from that where the row number for that manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. The query looks something like this:

        SET ROWCOUNT 6

        SELECT p.Id, GeneralCategory_Id, Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               MSRP, MemberPrice, FamilyImageName
        FROM (
            SELECT p.Id, gc.Id AS GeneralCategory_Id, p.Id AS Product_Id,
                   ctp.Store_id, Manufacturer_id,
                   ROW_NUMBER() OVER (PARTITION BY Manufacturer_id ORDER BY NEWID()) AS VendorOrder,
                   MSRP, MemberPrice, FamilyImageName
            FROM GeneralCategory gc
            INNER JOIN GeneralCategoriesToProductCategories gctpc ON gc.Id = gctpc.GeneralCategory_Id
            INNER JOIN ProductCategoryToProduct pctp ON gctpc.ProductCategory_Id = pctp.ProductCategory_Id
            INNER JOIN Product p ON p.Id = pctp.Product_Id
            INNER JOIN StoreToProduct ctp ON p.Id = ctp.Product_id
            WHERE gc.Id = @GeneralCategory
              AND ctp.Store_id = @StoreId
              AND p.Active = 1
              AND p.MemberPrice > 0
        ) p
        INNER JOIN Manufacturer m ON m.Id = p.Manufacturer_id
        WHERE VendorOrder <= 6
        ORDER BY NEWID()

        SET ROWCOUNT 0

    (I've tried to somewhat format it to make it cleaner, but I don't think it really helps.) Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time:

    - Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when creating this table, but I'm not concerned about this at this point, compared to the other seek...
    - Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.

    On categories without a lot of products, performance is acceptable (<50ms); however, larger categories can take a few hundred ms, with the largest category (which has about 170k products) taking 3s.
    It seems I have two ways to go from this point:

    1. Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this and still maintain the requirements (random products, with a good mix of manufacturers).
    2. Denormalize this data for the purpose of this query when doing the once-a-week import. However, I am unsure how to do this and maintain the requirements.

    Does anyone have any input on either of these items?

    Read the article

  • What do you think about gems and eggs? Alternatives?

    - by Juanlu001
    I've recently read some criticism (see 1, 2, 3) of the packaging and distribution systems of two popular programming languages: Ruby gems and Python eggs. The most important argument stated against them is that they replace the system package manager (in case there is one, as in every Linux distribution), which makes eggs and gems difficult to track, code difficult to patch, and so on. Are eggs and gems actually the right approach? If not, are there any alternatives for distributing Python or Ruby modules? Should developers focus on taking advantage of package manager (apt-get, pacman, ...) capabilities?

    Read the article

  • What companies do what I'm interested in?

    - by Alex
    I'm a systems guy. People change their concentrations to avoid taking operating systems, while I took it during my first semester after transferring. I'm taking compilers and networks now, and I think they're awesome. And yet there are so many job postings looking for people to do work in things like web development, and so few postings looking for people to work in kernel hacking or network engineering. What sorts of companies do these things? I'm currently awaiting a contract in the mail for an internship with VMWare, so I'm not out of a job for the summer. Still, I'd like to know which companies do these things.

    Read the article

  • How to manage a growing team?

    - by Andra
    I'm the admin assistant of the CTO, and our organization has recently experienced a lot of growth. Within six months, we have merged with another organization and our dev team has grown from 8 to 16, with another 8 people in QA. What we're dealing with now is a highly technical individual, with little patience, managing a much larger team than he's accustomed to (40% of which is junior), as well as an increase in the number of projects. Needless to say, my boss is being pulled in too many directions at once. How can I help him manage his workload and his team so that the team feels they're getting enough help and support and remains effective? Also, where can I find additional resources on managing a growing team?

    Read the article

  • Self-Executing Anonymous Function vs Prototype

    - by Robotsushi
    In JavaScript there are a few clearly prominent techniques for creating and managing classes/namespaces. I am curious what situations warrant using one technique vs. the other. I want to pick one and stick with it moving forward. I write enterprise code that is maintained and shared across multiple teams, and I want to know what the best practice is when writing maintainable JavaScript. I tend to prefer the Self-Executing Anonymous Function; however, I am curious what the community vote is on these techniques.

    Prototype:

        function obj() {
        }

        obj.prototype.test = function() {
            alert('Hello?');
        };

        var obj2 = new obj();
        obj2.test();

    Self-Executing Anonymous Function:

        //Self-Executing Anonymous Function
        (function( skillet, $, undefined ) {
            //Private Property
            var isHot = true;

            //Public Property
            skillet.ingredient = "Bacon Strips";

            //Public Method
            skillet.fry = function() {
                var oliveOil;

                addItem( "\t\n Butter \n\t" );
                addItem( oliveOil );
                console.log( "Frying " + skillet.ingredient );
            };

            //Private Method
            function addItem( item ) {
                if ( item !== undefined ) {
                    console.log( "Adding " + $.trim(item) );
                }
            }
        }( window.skillet = window.skillet || {}, jQuery ));

        //Public Properties
        console.log( skillet.ingredient ); //Bacon Strips

        //Public Methods
        skillet.fry(); //Adding Butter & Frying Bacon Strips

        //Adding a Public Property
        skillet.quantity = "12";
        console.log( skillet.quantity ); //12

        //Adding New Functionality to the Skillet
        (function( skillet, $, undefined ) {
            //Private Property
            var amountOfGrease = "1 Cup";

            //Public Method
            skillet.toString = function() {
                console.log( skillet.quantity + " " +
                             skillet.ingredient + " & " +
                             amountOfGrease + " of Grease" );
                console.log( isHot ? "Hot" : "Cold" );
            };
        }( window.skillet = window.skillet || {}, jQuery ));
        //end of skillet definition

        try {
            //12 Bacon Strips & 1 Cup of Grease
            skillet.toString(); //Throws Exception
        } catch( e ) {
            console.log( e.message ); //isHot is not defined
        }

    I feel that I should mention that the Self-Executing Anonymous Function is the pattern used by the jQuery team.

    Update: When I asked this question I didn't truly see the importance of what I was trying to understand. The real issue at hand is whether or not to use new to create instances of your objects, or to use patterns which do not require constructors or the use of the new keyword. I added my own answer, because in my opinion we should make use of patterns which don't use the new keyword. For more information please see my answer.

    Read the article

  • Java word scramble game [closed]

    - by Dan
    I'm working on code for a word scramble game in Java. I know the code itself is full of bugs right now, but my main focus is getting a vector of strings broken into two separate vectors containing hints and words. The text file that the strings are taken from has a colon separating them. So here is what I have so far:

        public WordApp() {
            inputRow = new TextInputBox();
            inputRow.setLocation(200,100);

            phrases = new Vector<String>(FileUtilities.getStrings());
            v_hints = new Vector<String>();
            v_words = new Vector<String>();

            textBox = new TextBox(200,100);
            textBox.setLocation(200,200);
            textBox.setText(scrambled + "\n\n Time Left: \t" + seconds/10 + "\n Score: \t" + score);

            hintBox = new TextBox(200,200);
            hintBox.setLocation(300,400);
            hintBox.hide();

            Iterator<String> categorize = phrases.iterator();
            while (categorize.hasNext()) {
                int index = phrases.indexOf(":");
                String element = categorize.next();
                v_words.add(element.substring(0, index));
                v_hints.add(element.substring(index + 1));
                phrases.remove(index);
                System.out.print(index);
            }
        }

    The FileUtilities file was given to us by the professor; here it is:

        import java.util.*;
        import java.io.*;

        public class FileUtilities {
            private static final String FILE_NAME = "javawords.txt";

            //------------------ getStrings -------------------------
            // returns a Vector of Strings
            // Each string is of the form: word:hint
            // where word contains no spaces.
            // The words and hints are read from FILE_NAME
            public static Vector<String> getStrings ( ) {
                Vector<String> words = new Vector<String>();
                File file = new File( FILE_NAME );
                Scanner scanFile;
                try {
                    scanFile = new Scanner( file );
                } catch ( IOException e ) {
                    System.err.println( "LineInput Error: " + e.getMessage() );
                    return null;
                }
                while ( scanFile.hasNextLine() ) {
                    // read the word and follow it by a colon
                    String s = scanFile.nextLine().trim().toUpperCase() + ":";
                    if ( s.length() > 1 && scanFile.hasNextLine() ) {
                        // append the hint and add to collection
                        s += scanFile.nextLine().trim();
                        words.add(s);
                    }
                }
                // shuffle
                Collections.shuffle(words);
                return words;
            }
        }
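
    As an aside on the split itself: phrases.indexOf(":") searches the vector for an element equal to ":" (so it returns -1 here), and calling phrases.remove(...) while iterating will throw a ConcurrentModificationException. A minimal corrected loop might look like the sketch below; it reuses the phrases, v_words and v_hints fields from the code above and simply calls indexOf on each string instead.

        // Hedged sketch: split each "WORD:hint" string on its first colon.
        for (String element : phrases) {
            int colon = element.indexOf(':');
            if (colon < 0) {
                continue;                       // skip malformed lines with no colon
            }
            v_words.add(element.substring(0, colon));
            v_hints.add(element.substring(colon + 1));
        }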

    Read the article

  • Deprecated vs. Denigrated in JavaDoc?

    - by jschoen
    In the JavaDoc for X509Certificate getSubjectDN() it states: "Denigrated, replaced by getSubjectX500Principal()." I am used to seeing Deprecated in the JavaDoc for methods that should no longer be used, but not Denigrated. I found a bug report about this particular case where it was closed with the comment: "This isn't a bug. 'Deprecated' is meant to be used only in serious cases." When we are using a method that is Deprecated, the general suggested action is to stop using the method. So what is the suggested action when a method is marked as Denigrated?
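
    Whatever the wording, the JavaDoc quoted above names the replacement, so the practical response is the same as for a deprecated method: call the replacement instead. A small hedged sketch of that substitution:

        import java.security.cert.X509Certificate;
        import javax.security.auth.x500.X500Principal;

        class SubjectName {
            // Prefer the replacement the JavaDoc names over the denigrated method.
            static String subjectOf(X509Certificate cert) {
                // Denigrated: cert.getSubjectDN().getName()
                X500Principal subject = cert.getSubjectX500Principal();
                return subject.getName();
            }
        }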

    Read the article

  • Modelling highly specific business requirements

    - by AndyBursh
    How can one go about modelling highly specific business requirements which have no precedent in the system? Take, for example, the following requirement:

        When a purchase order contains N lines, is over X value in total and is being recorded
        against project Y, an email needs to be sent to persons A and B with the details.

    This requirement supplements other requirements surrounding purchase orders, but comes in at a much later date in response to some ongoing problem elsewhere in the business. Persons A and B are not part of any role or group in the system, and don't hold any specific responsibility; they are simply the two people the business has appointed to receive these emails in this very specific case. Projects are also data driven, so project Y has no special properties to distinguish it from any other project. The only way to identify it is to compare its identifier to a magic number. How can one go about modelling this kind of case without introducing too much additional complexity? There are a couple of options I can think of right now:

    1. Perform the checks and actions inline with the existing code. Here we find the correct spot in the code, check the conditions in the requirement and send the emails to hardcoded addresses. Of course this is fraught with issues. At the very least it stops working if one of these people leaves or changes their email address. At worst you have to ensure that any tests and test data are aware that additional actions are taken for a specific set of criteria.
    2. Introduce some form of events system. Here we introduce an eventing system, so that we might react to some event and fulfil the requirement outside of the usual path of execution. This sounds like a cleaner solution than option 1, but the work involved is ultimately probably slightly overkill for this one small requirement. That said, having it in place does allow the system to handle these kinds of specific requirements consistently and easily in the future.

    Are there any other (good/better) ways of handling highly specific requirements? I mean other than telling the other parts of the business no!
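
    A minimal sketch of option 2, assuming a hypothetical PurchaseOrderRecorded event and Notifier interface; the thresholds, the recipients and the "magic" project id are all illustrative and would live in configuration rather than code.

        import java.math.BigDecimal;

        // Hypothetical domain event; fields mirror the conditions in the requirement.
        class PurchaseOrderRecorded {
            final long projectId;
            final int lineCount;
            final BigDecimal total;
            PurchaseOrderRecorded(long projectId, int lineCount, BigDecimal total) {
                this.projectId = projectId;
                this.lineCount = lineCount;
                this.total = total;
            }
        }

        // Hypothetical notification port.
        interface Notifier {
            void email(String[] recipients, String subject, String body);
        }

        // Listener that reacts to the event outside the usual path of execution.
        class SpecialPurchaseOrderListener {
            private final long projectY;        // the "magic number" project id, from configuration
            private final int minLines;         // N, from configuration
            private final BigDecimal minTotal;  // X, from configuration
            private final String[] recipients;  // persons A and B, from configuration
            private final Notifier notifier;

            SpecialPurchaseOrderListener(long projectY, int minLines, BigDecimal minTotal,
                                         String[] recipients, Notifier notifier) {
                this.projectY = projectY;
                this.minLines = minLines;
                this.minTotal = minTotal;
                this.recipients = recipients;
                this.notifier = notifier;
            }

            void on(PurchaseOrderRecorded event) {
                if (event.projectId == projectY
                        && event.lineCount >= minLines
                        && event.total.compareTo(minTotal) > 0) {
                    notifier.email(recipients, "Purchase order notification",
                            "A qualifying purchase order was recorded against project " + event.projectId);
                }
            }
        }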

    Read the article

  • How to approach scrum task burn down when tasks have multiple people's involvement?

    - by AgileMan
    In my company, a single task can never be completed by one individual. There is going to be a separate person to QA and code review each task. What this means is that each individual will give their estimate, per task, of how much time it will take to complete. The problem is, how should I approach burn down? If I aggregate the hours together, assume the following estimate:

        10 hrs - Dev time
         4 hrs - QA
         4 hrs - Code Review
        Task Estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their part of it. Should they mark the effort remaining, and then ADD the effort estimates to that? How are you guys doing this?

    UPDATE: To help clarify a few things, at my organization each task within a story requires three people:

    - Someone to develop the task (do unit tests, etc.)
    - A QA specialist to review the task (they primarily do integration and regression tests)
    - A tech lead to do code review

    I don't think there is a wrong way or a right way, but this is our way ... and that won't be changing. We work as a team to complete even the smallest level of a story whenever possible. You cannot actually test if something works until it is dev complete, and you cannot review the quality of the code either ... so the best you can do is split things up into small logical slices so that the bare minimum functionality can be tested and reviewed as early into the process as possible. My question to those that work this way would be how to burn down a "task" when set up like this. Unless a task has its own sub-tasks (which JIRA doesn't allow) ... I'm not sure of the best way to track "what's left" on a daily basis.

    Read the article

  • Provide an OnChange event for an internal property which is controlled externally?

    - by NGLN
    For fun and by request I am updating this ImageGrid component, a kind of listbox for images that has a FileNames property of type TStrings. For ease of writing, I have been misusing its FileNames.Objects property for bitmap storage. But since the TStrings type suggests that users of the component could or would want to use the Objects property for custom data, e.g. like TListBox.Items, I am rewriting the component to store the bitmaps elsewhere and leave FileNames.Objects untouched for unknown future usage. Now I am wondering whether to provide an OnChange event, and if so, whether to fire it when one or more FileNames.Objects changes. Trying to answer it myself, I dove into Delphi's own VCL and stumbled on:

    - TMemo: has an OnChange event, but ignores Lines.Objects
    - TListBox: has no OnChange event, but is capable of storing Items.Objects
    - TStringGrid: has no OnChange event, but is capable of storing Objects, Rows.Objects, Cols.Objects

    So now I am somewhat puzzled, because I cannot imagine Borland's developers left out events for several Objects properties merely out of ease. Sure, when a user changes a FileNames.Object in my component, he knows he does and could implement appropriate interaction himself. But wouldn't it be convenient if the component did so automatically? What would you expect from this component in this regard?

    Read the article

  • Finding Internship Opportunities

    - by mbreedlove
    I am a high school senior who will be majoring in Computer Science next fall. This summer, I would like to intern at a company relevant to my interests of investment banking and software engineering. What would be some possible ways to find openings? How should I contact them? (E-mail, phone, etc.) Should I prepare and submit a CV? I feel it might be a little dull as it would list no experience or references. After contacting someone about an internship, should I follow up, or just wait for them to contact me? Is there anything else I should do or be aware of?

    Read the article

  • Using Clojure instead of Python for scalability (multi core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, apart from ease of use, I shouldn't be coding in Python anymore, but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores using a more imperative language like Python, rather than a Lisp dialect or other functional language? It seems that all of its benefits come from using immutable data; can't I do just that in Python and have all the benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But, after a while, I found myself struggling too much to do some simple things. I think there are some things that are more imperative in their nature, which makes it difficult to model them in a functional way, I guess. The thing is, is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that using semaphores, lock mechanisms or other similar concurrency mechanisms are good alternatives to Clojure's 'automatic' parallelization.

    Read the article

  • Is curl something that's not expected to be installed on servers?

    - by Ieyasu Sawada
    Is curl something that's not expected to be installed on servers? I'm working for a small development shop, and 99% of the problems I'm having are about curl. Most of the projects I work on involve calling a web API, and most web APIs suggest using curl by default since you have to pass POST data in the request. Every time I complain to my senior that the server I'm working on doesn't have curl installed, the excuse I always get is that curl is not needed; you can always use file_get_contents. So the question: is curl something that's not expected to be installed on servers that run PHP? Should I always develop using file_get_contents and not curl? Are there any advantages of using file_get_contents over curl, or vice versa? If it helps, the context here is wordpress plugins, shopify apps, drupal modules and other bits of code that a lot of people can install.

    Read the article

  • in memory datastore in haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hash table? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?

    Read the article

  • I need an approach to the problem of preventing inserting duplicate records into the database

    - by Maurice
    Apologies if this question is asked on the incorrect "stack". A webservice that I call returns a list of data. The data from the webservice is updated periodically, so a call to the webservice done in one hour could return the same data as a call done an hour later. Also, the data is returned based on a start and end date. We have multiple users that can run the webservice search, and duplicate data is very likely to be returned (especially for historical data). However, I don't want to insert this duplicate data in the database. I've created a db table in which the data is stored; the most important columns are:

        Id         int autoincrement PK
        Date       date not null  -- The date to which the data set belongs.
        LastUpdate date not null  -- The date the data set was last updated.
        UserName   varchar(50)    -- The name of the user doing the search.

    I use SQL Server 2008 Express with C# 4.0 and Visual Studio 2010. Entity Framework is used as the ORM. If stored procedures could be avoided in the proposed solution, then that will be a plus. Another way of interpreting what I'm asking for is as follows: I have a million unique records in my table. A user does a new search, and the search results contain around 300k rows that are already in the db. An efficient solution to finding and inserting only the unique records is needed.

    Read the article

  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I've only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to make a replacement for their HR software. I used SQL Server, Entity Framework and WPF, along with MVVM and the Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (People, Salary, Vehicles/Assets, Statistics, etc.). I use integrated authentication, and due to the initial HR build, I can map users to people in positions, so I know who is who when they open the program, and I can show each person a customized dashboard given their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing and deploying it on my own. I'm part way through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources and Miscellaneous Utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.

    Read the article

  • What is the best way to handle dynamic content?

    - by user1561753
    So we run a site where there are elements of the interface that could potentially be changed at any moment in the backend. Specifically, we run a web service where certain functions are loaded dynamically. However, there are times where we remove certain functions, and we want the experience to be as seamless for the user as possible. We've considered a few methods of solving this:

    1. Ping the server every few seconds. If the functions are outdated/no longer available, refresh the user's page. While this would work the best, I feel like having that much IO can't be too good.
    2. When the user clicks a function, if it's outdated/no longer available, alert them in the response and refresh the page. This would also work fairly well.

    I guess I'm more wondering how web apps like Google Docs work, where you have content that has to be synced up across multiple users and that isn't more than a few seconds outdated. Sorry if this isn't the best place to ask this; I figured this was more of a site architecture question and that this might be the place to ask it over SO.

    Read the article

  • Where to start for writing a simple java IDE?

    - by AedonEtLIRA
    I would like to start working on my own custom IDE. The biggest reason I want to work on the IDE is to help me gain an even greater, more intimate understanding of Java (and other languages I add into it). I don't want to do anything super fancy or revolutionary; I'd be happy if I could create something as compact as the BlueJ IDE I used in high school and be content. I have a few questions on the specifics of the task that I hope I can get cleared up before I start investing time in this: Is there anything I should be aware of when writing the parser? Does anyone have any pointers I should be aware of: pitfalls, brick walls or other constraints?

    Read the article

  • PHP, when to use iterators, how to buffer results?

    - by Jon L.
    When is it best to use Iterators in PHP, and how can they be implemented to best avoid loading all objects into memory simultaneously? Do any constructs exist in PHP so that we can queue up the results of an operation for use with an Iterator, while again avoiding loading all objects into memory simultaneously? An example would be a curl HTTP request against a REST server. In the case of an HTTP request that returns all results at once (a la curl), would we be better off going with streaming results, and if so, are there any limitations or pitfalls to be aware of? If using streaming, is it better to replace curl with a PHP native stream/socket? My intention is to implement Iterators for a REST client, and separately for a document ORM that I'm maintaining, but only if I can do so while gaining benefits from reduced memory usage, increased performance, etc. Thanks in advance for any responses :-)

    Read the article

  • Bug Tracking Etiquette - Necromancy or Duplicate?

    - by Shauna
    I came across a really old (2+ years) feature request in a bug tracker for an open source project that was marked as "resolved (won't fix)" due to the lack of tools required to make the requested enhancement. Since that determination was made, new tools have been developed that would allow it to be resolved, and I'd like to bring that to the attention of the community for that application. However, I'm not sure what the generally accepted etiquette is for bug tracking in cases like this. Obviously, if the system explicitly states not to duplicate and will actively mark new items as duplicates (much in the way the SE sites do), then the answer would be to follow what the system says. But what about when the system doesn't explicitly say that, or a new user can't easily find a place that states what the system's preference is? Is it generally considered better to err on the side of duplication or necromancy? Does this differ depending on whether it's a bug or a feature request?

    Read the article

  • What makes an application memory bandwidth bound?

    - by TheLQ
    This has been something that's been bothering me for a while: what makes an application memory bandwidth bound? For example, take this monstrosity of a computer that calculated the 5 trillionth digit of pi (and later the 10 trillionth digit). I was surprised that they chose the smaller but faster 98 GB of RAM at 1066 MHz instead of the larger but slower 144 GB at 800 MHz. This is especially surprising considering they are using a 22 TB HD array to store the results from computation; more RAM means less need for hard drives. Maybe it's because I don't write applications for HPC servers, but how would RAM be the bottleneck? Are there any other non-HPC applications that usually run into this problem?
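
    One hedged way to think about it is arithmetic intensity: a loop that does very little computation per byte it pulls from RAM spends most of its time waiting on memory, so faster memory helps more than more cores or a higher clock. A toy sketch of such a loop follows (the array size, heap assumptions and any reported figure are illustrative, not a rigorous benchmark):

        // Streams a large array doing roughly one add per 8 bytes loaded, so the
        // effective throughput is bounded by memory bandwidth, not by the ALUs.
        class BandwidthBoundDemo {
            public static void main(String[] args) {
                long[] data = new long[1 << 25];              // ~256 MiB, far larger than any cache
                for (int i = 0; i < data.length; i++) data[i] = i;

                long start = System.nanoTime();
                long sum = 0;
                for (long v : data) sum += v;                 // trivially cheap work per element
                double seconds = (System.nanoTime() - start) / 1e9;

                double gib = data.length * 8.0 / (1 << 30);
                System.out.printf("sum=%d, ~%.1f GiB/s effective%n", sum, gib / seconds);
            }
        }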

    Read the article

  • My Only Gripe With Programming

    - by David Espejo
    It's that I'm having trouble practicing problems. Even if I decide to practice the problems from my C++ book, they don't give any idea of what the solution (program) should look like, so I can't compare to see if my program is similar in any way. My book gives me too many generic "write a program to do this" projects without really showing a concrete example of what "this" really is. In other words, how do I know that I did "that"? One problem in my book said to write a program that calculates the sales tax on a given item. First of all, sales tax differs by state (what's the state?), and what's the item (a house, a dog)? How can I check this to see if I'm right? Programming books don't have answer keys! I know that there is no ABSOLUTE answer, that's just silly, programs can be written in many ways, but a sample of what one solution would look like, matched to the difficulty of the problem, would really help! Is there a solution to this, maybe a book that has worked-out examples for the problems it gives, or online sources that do something similar? (Is there such a thing as a programming book with an answer key?)
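
    For the sales-tax exercise specifically, a worked solution usually just parameterizes the parts the book leaves open, the rate and the price, and computes the total. A hedged sketch follows (in Java rather than the book's C++, but the shape is the same; the 7.5% rate is an arbitrary assumption that a real program would read as input or look up per state):

        import java.math.BigDecimal;
        import java.math.RoundingMode;

        // One possible shape for the "sales tax on a given item" exercise.
        class SalesTax {
            public static void main(String[] args) {
                BigDecimal price = new BigDecimal("19.99");   // price of the item
                BigDecimal rate  = new BigDecimal("0.075");   // assumed 7.5% tax rate
                BigDecimal tax   = price.multiply(rate).setScale(2, RoundingMode.HALF_UP);
                System.out.println("Price: " + price + ", tax: " + tax + ", total: " + price.add(tax));
            }
        }

    Checking the output against a hand calculation (19.99 * 0.075 = 1.49925, which rounds to 1.50) is one way to verify an exercise like this when the book has no answer key.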

    Read the article
