Search Results

Search found 25198 results on 1008 pages for 'non programmers'.


  • Are UML class diagrams adequate for designing JavaScript systems?

    - by Vandell
    Given that UML is oriented towards a more classic approach to object orientation, can it still be used reliably to design JavaScript systems? One specific problem I can see is that class diagrams are, in fact, a structural view of the system, while JavaScript is more behaviour-driven. How can you deal with that? Please keep in mind that I'm not talking about the real-world domain here; it's a model for the solution that I'm trying to achieve.

    Read the article

  • Will learning ColdFusion help me advance my programming skills? [closed]

    - by chhantyal
    Currently I am working with a small web development company. We use jQuery on the front end, ColdFusion for the back end, and MySQL as our database. We just started to use HTML5 and CSS3. This is my first internship and job. I know the basics of Python and want to add Django or Ruby on Rails to my skill set. In addition, I want to advance my programming skills with machine learning, compilers, NoSQL and Unix hacking. I also find front-end web development pretty interesting. Should I focus on the front end and become skilled in HTML5/CSS3/JavaScript, or dive into the back end by learning ColdFusion? I will probably leave the company after a year, since I want to work with great product start-ups, and I live in India, where ColdFusion is not popular. Will learning ColdFusion help me become a better programmer?

    Read the article

  • How to select a range of li elements and move them inside a div [closed]

    - by user8012
    How can I select a range of li elements and move them inside a div element? For example:

        <ul>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
          <li>test</li>
        </ul>

    If there are more than 6 li elements, create columns like this:

        <ul>
          <li>
            <div class="column1">
              <ul>
                <li>test</li>
                <li>test</li>
                <li>test</li>
                <li>test</li>
                <li>test</li>
                <li>test</li>
              </ul>
            </div>
          </li>
          <li>
            <div class="column2">
              <ul>
                <li>test</li>
                <li>test</li>
                <li>test</li>
                <li>test</li>
                <li>test</li>
                <li>test</li>
              </ul>
            </div>
          </li>
        </ul>

    Read the article

  • Single or multiple return statements in a function [on hold]

    - by Juan Carlos Coto
    When writing a function that can have several different return values, particularly when different branches of code return different values, what is the cleanest or sanest way of returning? Please note that the following are really contrived examples, meant only to illustrate different styles.

    Example 1: Single return

        def my_function():
            if some_condition:
                return_value = 1
            elif another_condition:
                return_value = 2
            else:
                return_value = 3
            return return_value

    Example 2: Multiple returns

        def my_function():
            if some_condition:
                return 1
            elif another_condition:
                return 2
            else:
                return 3

    The second example seems simpler and is perhaps more readable. The first one, however, might describe the overall logic a bit better (the conditions affect the assignment of the value, not whether it's returned or not). Is the second way preferable to the first? Why?

    Read the article

  • How many tiers should my models have in a DB driven web app?

    - by Hanno Fietz
    In my experience, the "Model" in MVC is often not really a single layer. I would regularly have "backend" models and "frontend" models, where the backend ones have properties that I want to hide from the UI code, and the frontend ones have derived or composite properties which are more meaningful in the context of the UI. Recently, I have started to introduce a third layer in between, when database normalization creates tables that no longer really match the conceptual domain objects. In those projects I have model classes that are equivalent to the DB structure; these become components of objects that represent the domain model, and finally those are filtered and amended for displaying information to the user. Are there patterns or guidelines for how, when and why to organize the data model across application layers?
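
    For illustration, a minimal sketch of the three tiers described above (all names are invented, and C++ is used only as an example notation; the idea is language-independent):

        #include <string>
        #include <utility>

        // Tier 1: row structs mirroring the normalized DB tables
        struct CustomerRow { int id; std::string name; int address_id; };
        struct AddressRow  { int id; std::string city; };

        // Tier 2: domain object composed from several rows
        class Customer {
        public:
            Customer(CustomerRow row, AddressRow addr)
                : row_(std::move(row)), addr_(std::move(addr)) {}
            const std::string& name() const { return row_.name; }
            const std::string& city() const { return addr_.city; }
        private:
            CustomerRow row_;
            AddressRow  addr_;
        };

        // Tier 3: view model exposing only filtered, display-ready properties
        class CustomerView {
        public:
            explicit CustomerView(const Customer& c)
                : label_(c.name() + " (" + c.city() + ")") {}
            const std::string& label() const { return label_; }
        private:
            std::string label_; // composite property meaningful only to the UI
        };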

    Read the article

  • TDD: Write a separate test for object initialization, or rely on other tests to exercise it?

    - by DXM
    This seems to be a common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't easily be altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state, so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad", and all we put in there is exercising the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad". So my question: is TestLoad even necessary? Do you take it out and let the other tests simply exercise the loading, or leave it in so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in cases of loading this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest", and if that fails you know where to start looking. In case it matters, this question is mostly about C++, and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
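
    For what it's worth, one common compromise in CppUnit is to move the shared loading into the fixture's setUp() and keep a single explicit initialization test, so the duplication lives in one place and a loading regression fails fast. A minimal sketch (Document, Load and ItemCount are hypothetical stand-ins for the legacy class):

        #include <cppunit/extensions/HelperMacros.h>
        #include "Document.h" // hypothetical legacy class with Load() and ItemCount()

        class DocumentTest : public CppUnit::TestFixture {
            CPPUNIT_TEST_SUITE(DocumentTest);
            CPPUNIT_TEST(testLoadSucceeds);   // the one explicit initialization test
            CPPUNIT_TEST(testQueryAfterLoad); // behavior tests assume a loaded object
            CPPUNIT_TEST_SUITE_END();

        public:
            // Shared initialization: every test starts from a freshly loaded object,
            // so a loading regression shows up here rather than in each test body.
            void setUp() override { loaded_ = doc_.Load("fixtures/valid.dat"); }

            void testLoadSucceeds() { CPPUNIT_ASSERT(loaded_); }

            void testQueryAfterLoad() {
                CPPUNIT_ASSERT(loaded_);
                CPPUNIT_ASSERT(doc_.ItemCount() > 0);
            }

        private:
            Document doc_;
            bool loaded_ = false;
        };

        CPPUNIT_TEST_SUITE_REGISTRATION(DocumentTest);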

    Read the article

  • Right mix of planning and programming on a new project

    - by WarrenFaith
    I am about to start a new project (a game, but that's unimportant). The basic idea is in my head, but not all the details. I don't want to start programming without planning, but I am seriously fighting my urge to just do it. I want to do some planning first, to avoid refactoring the whole app just because a new feature I think of requires it. On the other hand, I don't want to spend multiple months (of spare time) planning before I start, because I fear I would lose my motivation in that time. What I am looking for is a way of combining both without one dominating the other. Should I run the project Scrum-style? Should I create user stories and then implement them? Should I work feature-driven? (I have some experience with Scrum and the classic "specification to code" way.)

    Read the article

  • Why does working processors harder use more electrical power?

    - by GazTheDestroyer
    Back in the mists of time when I started coding, at least as far as I'm aware, processors all used a fixed amount of power. There was no such thing as a processor being "idle". These days there are all sorts of technologies for reducing power usage when the processor is not very busy, mostly by dynamically reducing the clock rate. My question is: why does running at a lower clock rate use less power? My mental picture of a processor is of a reference voltage (say 5V) representing a binary 1, and 0V representing 0. Therefore I tend to think of a constant 5V being applied across the entire chip, with the various logic gates disconnecting this voltage when "off", meaning a constant amount of power is being used. The rate at which these gates are turned on and off seems to have no relation to the power used. I have no doubt this is a hopelessly naive picture, but I am no electrical engineer. Can someone explain what's really going on with frequency scaling, and how it saves power? Are there any other ways that a processor uses more or less power depending on state? For example, does it use more power if more gates are open? How are mobile / low-power processors different from their desktop cousins? Are they just simpler (fewer transistors?), or is there some other fundamental design difference?
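
    For reference, the textbook first-order model for the dynamic power of CMOS logic (a general approximation, not specific to any particular processor) is

        P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} \, f

    where \alpha is the activity factor (the fraction of gates switching each cycle), C the switched capacitance, V the supply voltage and f the clock frequency. Gates draw significant current mainly while switching (charging and discharging C), not while statically holding a level (apart from leakage), and because a lower f typically permits a lower V, power falls faster than linearly as the clock rate drops.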

    Read the article

  • Does C# give you "less rope to hang yourself" than C++?

    - by user115232
    Joel Spolsky characterized C++ as "enough rope to hang yourself". Actually, he was summarizing "Effective C++" by Scott Meyers: "It's a book that basically says, C++ is enough rope to hang yourself, and then a couple of extra miles of rope, and then a couple of suicide pills that are disguised as M&Ms..." I don't have a copy of the book, but there are indications that much of it relates to pitfalls of managing memory, which seem like they would be rendered moot in C# because the runtime manages those issues for you. Here are my questions: Does C# avoid pitfalls that are avoided in C++ only by careful programming? If so, to what degree, and how are they avoided? Are there new, different pitfalls in C# that a new C# programmer should be aware of? If so, why couldn't they be avoided by the design of C#?
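
    For a flavor of the rope in question, here is a deliberately buggy C++ sketch that a compiler will accept (at most with a warning) yet has undefined behaviour, and whose direct equivalent is simply not expressible in safe C# code:

        #include <iostream>

        int* dangle() {
            int local = 42;
            return &local; // returns the address of a stack variable that is about to die
        }

        int main() {
            int* p = dangle();
            std::cout << *p; // undefined behaviour: may print 42, print garbage, or crash
        }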

    Read the article

  • Design for an interface implementation that provides additional functionality

    - by Limbo Exile
    There is a design problem that I came upon while implementing an interface. Let's say there is a Device interface that promises to provide the functionalities PerformA() and GetB(). This interface will be implemented for multiple models of a device. What happens if one model has an additional functionality, CheckC(), which has no equivalent in the other implementations? I came up with different solutions, none of which seems to comply with interface design guidelines:

    1. Add the CheckC() method to the interface and leave one of its implementations empty:

        interface ISomeDevice
        {
            void PerformA();
            int GetB();
            bool CheckC();
        }

        class DeviceModel1 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }

            public int GetB() { return 1; }

            public bool CheckC()
            {
                bool res = false; // assign res a value based on some validation
                return res;
            }
        }

        class DeviceModel2 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }

            public int GetB() { return 1; }

            public bool CheckC()
            {
                return true; // without checking anything
            }
        }

    This solution seems incorrect, as a class implements an interface without truly implementing all the demanded methods.

    2. Leave the CheckC() method out of the interface and use an explicit cast in order to call it:

        interface ISomeDevice
        {
            void PerformA();
            int GetB();
        }

        class DeviceModel1 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }

            public int GetB() { return 1; }

            public bool CheckC()
            {
                bool res = false; // assign res a value based on some validation
                return res;
            }
        }

        class DeviceModel2 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }

            public int GetB() { return 1; }
        }

        class DeviceManager
        {
            private ISomeDevice myDevice;

            public void ManageDevice(bool newDeviceModel)
            {
                myDevice = newDeviceModel ? (ISomeDevice)new DeviceModel1() : new DeviceModel2();
                myDevice.PerformA();
                int b = myDevice.GetB();

                if (newDeviceModel)
                {
                    DeviceModel1 newDevice = myDevice as DeviceModel1;
                    bool c = newDevice.CheckC();
                }
            }
        }

    This solution seems to make the interface inconsistent.

    3. For the device that supports CheckC(): add the logic of CheckC() into the logic of another method that is present in the interface. This solution is not always possible.

    So, what is the correct design to be used in such cases? Maybe creating an interface should be abandoned altogether in favor of another design?

    Read the article

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across some platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence. Applications would have to filter that noise themselves. Older Mac versions used only CR as the line ending in text mode, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement its own parsing instead of using text mode. Implementing newline interpretation in the parser might also remove some overhead of using text mode, since buffers would need to be rewritten (and possibly resized) before returning to the application, which may be less efficient than doing it in the application instead. So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
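
    For reference, the distinction surfaces in C and C++ through the mode string passed to fopen (behaviour as documented for the Windows C runtime; on POSIX systems the two modes are identical):

        #include <cstdio>

        int main() {
            // Text mode: on Windows, "\r\n" in the file arrives as '\n',
            // and a 0x1A (Ctrl-Z) byte may be treated as end-of-file.
            std::FILE* text = std::fopen("data.txt", "r");

            // Binary mode: bytes pass through untranslated on every platform.
            std::FILE* bin = std::fopen("data.txt", "rb");

            if (text) std::fclose(text);
            if (bin)  std::fclose(bin);
        }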

    Read the article

  • Is using goto to improve DRY-ness OK?

    - by Marco Scannadinari
    My code has many checks to detect errors in various cases (many conditions would result in the same error), inside a function returning an error struct. Instead of looking like this:

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if (something) {
                error.error = true;
                error.description = "invalid input";
                return error;
            }
            ...
            case 1024:
                error.error = true;
                error.description = "invalid input"; // same error, but different detection scenario
                return error;
                break; // don't comment on this break please (EDIT: pun unintended)
            ...

    Is use of goto in the following context considered better than the previous example?

        err_struct myfunc(...) {
            err_struct error = { .error = false };
            ...
            if (something) goto invalid_input;
            ...
            case 1024:
                goto invalid_input;
                break;

            return error;

        invalid_input:
            error.error = true;
            error.description = "invalid input";
            return error;

    Read the article

  • Will Apple abandon OpenCL?

    - by John
    I am developing OpenCL applications, amongst others for Mac OS. The new MacBook Pro 13-inch comes with an Intel HD Graphics 3000 card, so it seems reasonable to assume that all their mainstream computers, like the MacBook and Mac mini, will also come out with this Intel graphics card soon. OpenCL is not available for Intel graphics cards. Intel has a terrible reputation for graphics drivers, and Apple knows this, which makes me wonder whether Apple is abandoning OpenCL already, especially considering that OpenCL should run anywhere, not only on high-end systems. Developing applications only for the high-end Macs with dedicated graphics hardware, or for previous-generation hardware with the GeForce 320M, would not be a feasible option for me. Does anybody have any thoughts on this?

    Read the article

  • Are the National Computer Science Academy certifications worth it?

    - by Horacio Nuñez
    Hi, everyone. I have a question regarding the real value of having NCSA certifications. Today I reached their site and easily passed the JavaScript certification within minutes, but I never got questions related to JavaScript Object Notation (JSON), closures or browser-specific APIs. These facts led me to doubt the real value of the test (and of the certification you can have if you pay them $34), but maybe I'm wrong and just earned a certification respected within the States for easy questions... in which case I could spend some time doing other certifications on the same site. Do you have an NCSA certification, and do you think it is worth having on your resume? Or do you know of a better certification program? Thanks in advance; looking forward to your considerations.

    Read the article

  • What user-friendly term should I use for a view that lives under a tab in a tab bar app?

    - by Emile Cormier
    My app uses a tab bar controller. In the user documentation, I'm not sure what name to use for a view that lives under a tab. For example, the app has a Settings tab. In the user documentation, I have a sentence that goes something like this: "This threshold can be adjusted in the Settings tab." "Settings tab" is not terribly user-friendly. What would be a better term than "tab"? I've looked through Apple's Human Interface Guidelines, but I can't find an official user-friendly term for "view that lives under a tab".

    Read the article

  • Algorithm to calculate trajectories from vector field

    - by cheeesus
    I have a two-dimensional vector field, i.e., for each point (x, y) I have a vector (u, v), where u and v are functions of x and y. This vector field canonically defines a set of trajectories, i.e. a set of paths a particle would take if it follows along the vector field. In the following image, the vector field is depicted in red, and there are four trajectories, partly visible and depicted in dark red: I need an algorithm which efficiently calculates some trajectories for a given vector field. The trajectories must satisfy some kind of minimum denseness in the plane (for every point in the plane we must have a 'nearby' trajectory), or some other condition that yields a reasonable set of trajectories. I could not find anything useful on Google about this, and Stack Exchange doesn't seem to cover the topic either. Before I start devising such an algorithm by myself: are there any known algorithms for this problem? What are they called; which keywords do I have to search for?
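
    For context, the standard building block here is straightforward: each trajectory is a streamline, obtained by numerically integrating dx/dt = u(x, y), dy/dt = v(x, y) from a seed point, e.g. with classic fourth-order Runge-Kutta. A minimal C++ sketch (names invented; the field function is supplied by the caller):

        #include <vector>

        struct Vec2 { double x, y; };

        Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
        Vec2 operator*(double s, Vec2 a) { return {s * a.x, s * a.y}; }

        // Trace one trajectory from `seed` through the field `f` with classic RK4.
        // `h` is the step size; `steps` bounds the trajectory length.
        template <typename Field>
        std::vector<Vec2> traceStreamline(Field f, Vec2 seed, double h, int steps) {
            std::vector<Vec2> path{seed};
            Vec2 p = seed;
            for (int i = 0; i < steps; ++i) {
                Vec2 k1 = f(p);
                Vec2 k2 = f(p + (h / 2) * k1);
                Vec2 k3 = f(p + (h / 2) * k2);
                Vec2 k4 = f(p + h * k3);
                p = p + (h / 6) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
                path.push_back(p);
            }
            return path;
        }

        // Example usage: trace a circular field from (1, 0).
        // auto path = traceStreamline([](Vec2 p) { return Vec2{-p.y, p.x}; },
        //                             {1, 0}, 0.01, 1000);

    For the denseness condition, a useful search keyword is "evenly-spaced streamline placement" (e.g. the Jobard-Lefer algorithm), which seeds new trajectories only where no existing trajectory is within a chosen separation distance.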

    Read the article

  • How can you predict the time it will take for two processes on two different machines in a cluster to communicate?

    - by Dokkat
    I am trying to develop a computing application which needs a lot of memory (500 GB). Buying a single machine with that much is overly expensive. I can, though, buy ~100 small instances on DigitalOcean or similar, divide the memory into blocks, and use TCP to emulate shared memory between the instances. Now, my question is: how can I measure or predict the time it will take for two processes on two different machines like that to share information, compared to IPC and shared memory? Are there rules of thumb? I don't want exact values, but knowing more or less how much faster one is would be very helpful in assessing the feasibility of this approach.
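
    As a rough worked example (order-of-magnitude figures only; actual numbers vary by hardware and provider): reading main memory costs on the order of 100 ns, a local IPC round trip costs a few microseconds, and a TCP round trip between two instances in the same datacenter typically costs a few hundred microseconds. Emulated shared memory over TCP is therefore roughly three to four orders of magnitude slower per access than real shared memory, which suggests the approach is only feasible if accesses are batched into large, infrequent transfers.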

    Read the article

  • Why would a Java app make an RPC call to itself?

    - by amphibient
    I am working with a multithreaded, homegrown, multi-module app at my new job. We use the Thrift protocol to communicate RPC calls between different stand-alone applications in a distributed system. One of them listens on multiple ports, and I just noticed that it actually makes an RPC call to itself, from one thread invoked from one socket it listens to (a web service call) to another port within the same app. I verified that it could accomplish the same thing if it just went and directly called the method that the remote procedure ultimately invokes, as it is all within the same application, the same JVM. To make it even more mysterious, the call is completely synchronous, i.e. no callbacks involved: the first thread just sits and waits until its call across the wire to itself comes back. Now, I am perplexed as to why anybody would do it this way. It seems like phoning somebody who sits in the same room as you. Can anybody provide an explanation for why the developer before me would come up with the above-mentioned model? Maybe there is a reason and I am missing something.

    Read the article

  • How to fix "fatal error: jvmti.h: No such file or directory, compilation terminated" in C code on Ubuntu? [on hold]

    - by Blue Rose
    How do I fix "fatal error: jvmti.h: No such file or directory, compilation terminated" for C code on Ubuntu? My C code is:

        #include "jvmti.h"
        #include <stdio.h> /* needed for printf; missing in the original listing */

        JNIEXPORT jint JNICALL
        Agent_OnLoad(JavaVM *jvm, char *options, void *reserved)
        {
            /* We return JNI_OK to signify success */
            printf("\nBushra Za'areer,\n\n");
            return JNI_OK;
        }

        JNIEXPORT void JNICALL
        Agent_OnUnload(JavaVM *vm)
        {
        }

    Typing this command in the terminal:

        gcc -Wall -W -Werror first_agent.c -o firstagent

    gives:

        first_agent.c:1:19: fatal error: jvmti.h: No such file or directory
        compilation terminated.

    The Java JDK version is javac 1.7.0_25, and the GCC version is 4.7.3 (Ubuntu/Linaro 4.7.3-2ubuntu4). Should I update GCC to 4.8?
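
    (For context: JVMTI agents are normally compiled with the JDK's header directories on the include path and built as a shared library. A typical invocation on Linux, assuming JAVA_HOME points at the JDK installation, looks something like:

        gcc -Wall -W -Werror -shared -fPIC \
            -I"$JAVA_HOME/include" -I"$JAVA_HOME/include/linux" \
            first_agent.c -o libfirstagent.so
    )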

    Read the article

  • What is a good strategy for binding view objects to model objects in C++?

    - by B.J.
    Imagine I have a rich data model that is represented by a hierarchy of objects. I also have a view hierarchy with views that can extract required data from model objects and display the data (and allow the user to manipulate it). Actually, there could be multiple view hierarchies that can represent and manipulate the model (e.g. an overview-detail view and a direct-manipulation view). My current approach is for the controller layer to store a reference to the underlying model object in the view object. The view object can then get the current data from the model for display, and can send the model object messages to update the data. View objects are effectively observers of the model objects, and the model objects broadcast notifications when properties change. This approach allows all the views to update simultaneously when any view changes the model. Implemented carefully, this all works. However, it does require a lot of work to ensure that no view or model objects hold any stale references to model objects. The user can delete model objects or sub-hierarchies of the model at any time. Ensuring that all the view objects that hold references to deleted model objects are cleaned up is time-consuming and difficult. It feels like the approach I have been taking is not especially clean; while I don't want to have explicit code in the controller layer for mediating the communication between the views and the model, it seems like there must be a better (implicit) approach for establishing bindings between the view and the model, and between related model objects. In particular, I am looking for an approach (in C++) that understands two key points: there is a many-to-one relationship between view and model objects, and if the underlying model object is destroyed, all the dependent view objects must be cleaned up so that no stale references exist. While shared_ptr and weak_ptr can be used to manage the lifetimes of the underlying model objects and allow for weak references from the view to the model, they don't provide for notification of the destruction of the underlying object (they do in the sense that the use of a stale weak_ptr allows for detection, but I need an approach that notifies the dependent objects that their weak reference is going away). Can anyone suggest a good strategy to manage this?
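
    One possible shape for this (a minimal sketch with invented names, not a complete solution): let the model actively notify its dependents from its destructor, while views keep only weak references. Note that a real implementation must also deregister the callback when a view is destroyed before its model, otherwise the stored lambda would capture a dangling pointer.

        #include <functional>
        #include <memory>
        #include <vector>

        class ModelObject {
        public:
            using DeathCallback = std::function<void()>;

            // Views register to be told when this model object goes away.
            void onDestroyed(DeathCallback cb) { deathCallbacks_.push_back(std::move(cb)); }

            ~ModelObject() {
                for (auto& cb : deathCallbacks_) cb(); // notify every dependent view
            }

        private:
            std::vector<DeathCallback> deathCallbacks_;
        };

        class View {
        public:
            explicit View(const std::shared_ptr<ModelObject>& model) : model_(model) {
                model->onDestroyed([this] {
                    model_.reset(); // drop the stale reference; hide or disable the widget here
                });
            }

            bool hasModel() const { return !model_.expired(); }

        private:
            std::weak_ptr<ModelObject> model_; // many views may observe one model; none keeps it alive
        };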

    Read the article

  • What constitutes proper use of threads in programming?

    - by Smith
    I am tired of hearing people recommend that you should use only one thread per process, while many programs use up to 100 per process! Take, for example, some common programs: the VB.NET IDE uses about 25 threads when not debugging, System uses about 100, Chrome uses about 19, and Avira uses more than 50. Any time I post a thread-related question, I am reminded almost every time that I should not use more than one thread per process, and yet all the programs I mention above are running on my system with a single processor. What constitutes proper use of threads in programming? Please keep the comments general, but I'd prefer the .NET Framework. Thanks. EDIT: changed "processor" to "process".

    Read the article

  • Is there a secure way to add a database troubleshooting page to an application?

    - by Josh Yeager
    My team makes a product (business management software) that our customers install on their own servers. The product uses a SQL database for data storage and app configuration. There have been quite a few cases where something strange happened in the customer's database (caused by bugs in our app and also sometimes admins who mess with the database). To figure out what is wrong with the data, we have to send SQL scripts to the customer and tell them how to run them on the database server. Then, once we know how to fix it, we have to send another script to repair the data. Is there a secure way to add a page in our application that allows an application admin to enter SQL scripts that read and write directly to the database? Our support team could use that to help customers run these scripts, without needing direct access to the SQL server. My big concerns are that someone might abuse this power to get data they shouldn't have and maybe to erase or modify data that they shouldn't be able to modify. I'm not worried about system admins, because they could find another way to do the same thing. But what if someone else got access to the form? Is there any way to do this kind of thing securely?

    Read the article

  • How to program something with the expectation that it will work the first time?

    - by Peter Turner
    I had a friend in college who programmed something that worked the first time; that was pretty amazing. But as for me, I just fire up the debugger as soon as I finally get whatever I'm working on to compile - it saves me time (kidding of course; I sometimes hold out a little bit of hope, or use a lot of premeditated debug strings). What's the best way to approach Dijkstra's ideal for our programs? Or is this just some sort of pie-in-the-sky old fool's quest for greatness, applicable only to finite tasks, that no one should hope for in our professional lives because programming is just too complex?

    Read the article

  • It's hard to judge your own product. Is there a name for this issue?

    - by Epicmaster
    I was just talking to my partner about how hard it is to judge how good your own product is after a while, because you use it so often. You literally spend hours on your computer doing nothing but work on this consumer-facing application, and you start to feel a little fatigue from using it over and over and over, at least a hundred times a day. You get scared that this fatigue may mean the product you are building will have the same effect on its users, which might mean you are doing something wrong. Is there a name for this in product development: for the fact that, as the designer + programmer + everything else, your product might not suck as much as you think simply because you spend way too much time with it? Or for a variation of this?

    Read the article

  • "Viral" license that only blocks legal actions of user and developer against each other

    - by Lukasz Zaroda
    I have been thinking a lot about software licensing lately, because I would like to do some coding. I'm not an expert in all those licenses, so I came up with my own idea, and before I put it on paper, I would like to make sure that I haven't reinvented the wheel; maybe I could use something that already exists. The main idea behind my license is to guarantee freedom of use of the software, but not "freedom to" (positive) (e.g. freedom to have the source code), rather "freedom from" (negative) (strictly, freedom from legal actions against you). It would be a "viral" copyleft license. You would be able to do everything you want with the software (and its binaries, e.g. reverse engineering) without fear, as long as you include information about the author and/or authors, and all derivative works are distributed under the same license. I'm not interested in anything that would restrict the freedom of a company to do something like "tivoization". I'm just trying to accomplish something that would block any legal actions of the user and the developer against each other, with the exception of basic attribution. Does something like that exist?

    Read the article
