Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 227/563

  • I need recommendations on free, open source, PHP-based business intelligence widget frameworks [on hold]

    - by Volomike
    I'm a PHP developer on Linux, and my manager wants a business intelligence dashboard. He wants to see our profit/loss figures in real time, in fancy charts, based on our software sales. I could code it all from scratch and use the Google Charts API or some other charting API to help me. However, I wanted to know if there is a free, open source, PHP-based business intelligence package out there, or some sort of widget framework that I could start with. That way, I can build the BI widgets inside that framework and not have to do everything from scratch. I apologize ahead of time if this is the wrong Stack Exchange site for this query; I wasn't sure where to place it, and I do want to follow the rules.

    Read the article

  • How is "cloud computing" different from "client-server"?

    - by BellevueBob
    While I was watching the CEO of a new "cloud computing" company describe his company on a finance TV program today, he said something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware. The "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site that presents a form which a user fills out, pushes a button on the page, and returns some report that was generated by the web server, isn't that the same as "cloud" computing? And would you not consider my web browser the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the closest one in the Stack universe, and this is my first time here. I'm an old-timer, programming since mainframe days in the late '70s.

    Read the article

  • Write this Java source in Clojure

    - by tikky
    I need to write this code in Clojure:

        package com.example.testvaadin;

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServletRequest;
        import clojure.lang.RT;
        import com.vaadin.Application;
        import com.vaadin.terminal.gwt.server.AbstractApplicationServlet;

        public class Clojure4Vaadin extends AbstractApplicationServlet {

            @Override
            protected Class getApplicationClass() throws ClassNotFoundException {
                return Application.class;
            }

            @Override
            protected Application getNewApplication(HttpServletRequest request) throws ServletException {
                try {
                    RT.load(getServletConfig().getInitParameter("script-name"), true);
                    return (Application) RT.var(getServletConfig().getInitParameter("package-name"),
                            getServletConfig().getInitParameter("function-name")).invoke(new String[0]);
                } catch (Exception e) {
                    throw new ServletException(e);
                }
            }
        }

    Read the article

  • How many questions is it appropriate to ask as an intern?

    - by Casey Patton
    So, I just started an internship, and I'm worried that I'm asking too many questions. My mentor assigns me projects and helps me learn all the company's technologies and methodologies. However, there's so much new material for me to learn while doing this project that I have a lot of questions. I generally ask questions over instant messages or E-mail (those are the primary modes of communication for my company). I'm trying to be careful not to ask too many questions: I don't want to come off as annoying or dumb. How many questions are appropriate to ask? Once an hour? More? Less? Keep in mind, my mentor is also a fellow programmer who has his own responsibilities.

    Read the article

  • GitHub team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole forking concept in GitHub for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have two remotes: origin (which is the remote fork) and an upstream (which is used to "sync" changes from the main repository). Not sure if it's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on, and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

    Read the article

  • Can I consider interface-oriented programming to be good object-oriented programming?

    - by david
    I have been programming for decades, but I was not used to object-oriented programming. In recent years, though, I had a great opportunity to learn OOP, its principles, and a lot of patterns that are great. Since I learned OOP, I have tried to apply it to a couple of projects and found those projects successful. Unfortunately I didn't follow extreme programming, which suggests writing tests first, mainly because the time frames were tight. What I did for those projects was:

    - identify all necessary classes and create them with proper properties and methods
    - whenever there is a dependency between classes, write an interface between them
    - see if there are any patterns I can apply to certain relationships between classes

    By successful, I mean that development was quick, the classes can be reused better, and the design is flexible enough that another programmer does not have to change something else to fix another part. But I wonder if this is a good practice. Of course, I know I need to put writing unit tests first in my work process. But other than that, is there any problem with this approach - creating lots of interfaces - in the long term?

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is: self-defeating; just a clone of an existing, better-established license; practical; any more "corporate-friendly" than the GPL; or too vague/open-ended; and finally, whether there is a better license that achieves a similar effect. I wanted a license that would (in simple terms):

    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license

    Here is the proposed license, which is just the Simplified BSD with a couple of additional clauses (items 3-5 below, which were bolded in the original).

    Copyright (c) (year), (author) (email). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
    2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
    3. The copyright holder(s) must be notified of any redistributions of source code.
    4. The copyright holder(s) must be notified of any redistributions in binary form.
    5. The copyright holder(s) must be granted access to the source code and/or the binary form of any redistribution upon the copyright holder's request.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • Are Intel compilers really better than Microsoft ones?

    - by Rocket Surgeon
    Years ago I was surprised when I discovered that Intel sells Visual Studio-compatible compilers. I tried one, in particular for C/C++, along with its fantastic diagnostic tools. But the code was simply not that computationally intensive for me to notice the difference. The only impression was: did Intel really do it for me just now? Wow, amazing tools with nanosecond resolution, unbelievable. But the trial ended and the team never seriously considered a purchase. From your experience, if license cost does not matter, which vendor is the winner? This is not a broad or vague question, or an attempt to spark a holy war. It is a question about two very visible tools. Nobody likes it when tools have any mysteries or surprises. And choices between best and best are always a pain. I also understand the "grass is greener" argument. I want to hear all the "what if" stories. What if Intel just locally optimizes it for the chip stepping of the month, and not every hardware target will actually work as well as with Microsoft-compiled code? What if AMD hardware is the target and everything will slow down for no reason? Or, on the other hand, what if Intel's hardware has so many unnoticed opportunities that Microsoft compiler writers are too slow to adopt them and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop? And so on. But surely someone knows some answers.

    Read the article

  • Create software without programming

    - by Hafizul Amri
    Is there any system or software that lets you create a system or software without having to do programming? UPDATE: Answering Kennethvr's question: the software I mean is something like web-based software used to create a simple CRUD system, such as a contact management system, where you can choose how the data will be displayed. UPDATE: I have found the PHPRunner software. It can create a simple CRUD system without your needing to touch any code. Take a look at it!

    Read the article

  • Handling extremely large numbers in a language which can't handle them?

    - by Mallow
    I'm trying to think about how I would go about doing calculations on extremely large numbers (to infinity - integers, no floats) if the language construct is incapable of handling numbers larger than a certain value. I am sure I am not the first nor the last to ask this question, but the search terms I am using aren't giving me an algorithm to handle those situations. Rather, most suggestions offer a language change or variable change, or talk about things that seem irrelevant to my search. So I need a little guidance. I would sketch out an algorithm like this:

    1. Determine the max length of the integer variable for the language.
    2. If a number is more than half the max length of the variable, split it into an array (to give a little play room).
    3. Array order: [0] = the digits furthest to the right, ..., [n-max] = the digits furthest to the left. Ex. Num: 29392023 becomes Array[0]: 23, Array[1]: 20, Array[2]: 39, Array[3]: 29.

    Since I established half the length of the variable as the mark-off point, I can then calculate the ones, tens, hundreds, etc. place via the halfway mark, so that if a variable's max length is 10 digits, from 0 to 9999999999, then I know that halving that to five digits gives me some play room. So if I add or multiply, I can have a variable-checker function that sees that the sixth digit (from the right) of array[0] is in the same place as the first digit (from the right) of array[1]. Dividing and subtracting have their own issues which I haven't thought about yet. I would like to know about the best implementations of supporting numbers larger than the program can handle.
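
    A minimal sketch of that chunk-array idea in Java (the class name, the base-10000 chunk size, and the string round-trip are illustrative assumptions, not part of the question):

        import java.util.ArrayList;
        import java.util.List;

        public class BigNum {
            // Each element is one base-10000 "digit", least significant chunk first,
            // which leaves plenty of headroom in a 32-bit int for carries.
            static final int BASE = 10_000;

            static List<Integer> fromString(String s) {
                List<Integer> chunks = new ArrayList<>();
                for (int end = s.length(); end > 0; end -= 4) {
                    chunks.add(Integer.parseInt(s.substring(Math.max(0, end - 4), end)));
                }
                return chunks;
            }

            // Schoolbook addition over chunks: exactly the place-value bookkeeping
            // described above, with the carry crossing chunk boundaries.
            static List<Integer> add(List<Integer> a, List<Integer> b) {
                List<Integer> sum = new ArrayList<>();
                int carry = 0;
                for (int i = 0; i < Math.max(a.size(), b.size()) || carry != 0; i++) {
                    int x = carry + (i < a.size() ? a.get(i) : 0) + (i < b.size() ? b.get(i) : 0);
                    sum.add(x % BASE);
                    carry = x / BASE;
                }
                return sum;
            }

            static String render(List<Integer> chunks) {
                StringBuilder sb = new StringBuilder();
                for (int i = chunks.size() - 1; i >= 0; i--) {
                    // Only the most significant chunk may drop leading zeros.
                    sb.append(i == chunks.size() - 1 ? chunks.get(i).toString()
                                                     : String.format("%04d", chunks.get(i)));
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                // Prints 1000029392022, i.e. 29392023 + 999999999999.
                System.out.println(render(add(fromString("29392023"), fromString("999999999999"))));
            }
        }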

    Read the article

  • C Programming matrix

    - by Bilal Khan
    In this program the user enters the number of columns of the matrix and then the entries of the matrix. So, for example, if the user enters 2 for the column count and 1 2 3 4 for the entries, then the program builds a 2 by 2 matrix with 1 2 3 4 as entries. My program works perfectly in such a case. However, if the user had only entered 1 2 3, then my program makes a matrix with garbage values. In such a case I would like the program to exit instead. It is a simple question, but it has me baffled.

        #include <stdio.h>
        #include <stdlib.h>

        int main()
        {
            int m, x, n, c = 0, d, k, matrix[10][10], transpose[10][10], product[10][10];

            printf("Enter the number of columns of matrix ");
            scanf("%d", &m);
            if (m <= 0 || m > 10) {
                printf("You entered an invalid value.");
                exit(0);
            }

            printf("Enter the elements of matrix \n");
            for (c = 0; c < 10; c++) {
                for (d = 0; d < m; d++) {
                    scanf("%d", &matrix[c][d]);
                    if (matrix[c][d] == 99)   /* 99 is the sentinel value that ends input */
                        break;
                }
                if (d < m && matrix[c][d] == 99)   /* only inspect the cell the inner loop actually read */
                    break;
            }

            printf("\nHere is your matrix:\n");
            int i;
            for (i = 0; i < c; i++) {
                for (d = 0; d < m; d++)
                    printf("%3d ", matrix[i][d]);
                printf("\n");
            }

            return 0;
        }

    Read the article

  • Java - What methods to put in an interface and what to keep out

    - by lewicki
    I'm designing a file handler interface:

        public interface FileHandler {
            public void openFileHandler(String fileName);
            public void closeFileHandler();
            public String readLine();
            public String[] parseLine(String line);
            public String[] checkLine(String[] line);
            public void incrementLineCount();
            public void incrementLineSuccessCount();
            public void incrementLineErrorCount();
            public int getLineCount();
            public int getLineSuccessCount();
            public int getLineErrorCount();
        }

    It soon became apparent to me that these methods can't be made private. I don't want incrementLineCount to be public. What is the proper way to design an interface like this?
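
    One hedged sketch of the usual answer - keep only the contract callers need in the interface, and make the counters an implementation detail (the trimmed method set and the class name here are assumptions about intent, not from the question):

        // Only the operations a caller should invoke; no counter mutation.
        public interface FileHandler {
            void open(String fileName);
            void close();
            String readLine();
            String[] parseLine(String line);
            int getLineCount();
            int getLineSuccessCount();
            int getLineErrorCount();
        }

        // The increments happen privately inside the implementation, so no
        // client of the FileHandler type can corrupt the counts.
        class TextFileHandler implements FileHandler {
            private int lineCount, successCount, errorCount;

            @Override public void open(String fileName) { /* open the underlying reader */ }
            @Override public void close() { /* close it */ }

            @Override public String readLine() {
                lineCount++;           // bookkeeping is a side effect of reading,
                return null;           // not a public operation (body stubbed here)
            }

            @Override public String[] parseLine(String line) {
                successCount++;        // or errorCount++ when parsing fails
                return new String[0];  // stubbed
            }

            @Override public int getLineCount()        { return lineCount; }
            @Override public int getLineSuccessCount() { return successCount; }
            @Override public int getLineErrorCount()   { return errorCount; }
        }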

    Read the article

  • I have a stack trace and a limit of 250 characters for a bug report

    - by George Duckett
    I'm developing an Xbox indie game, and as a last resort I have a try...catch encompassing everything. At this point, if an exception is raised, I can get the user to send me a message through the Xbox; however, the limit is 250 characters. How can I get the most value out of my 250 characters? I don't want to do any encoding/compressing, at least initially. Any solution to this problem could be compressed if needed as a second step anyway. I'm thinking of doing things like turning this:

        at EasyStorage.SaveDevice.VerifyIsReady()
        at EasyStorage.SaveDevice.Save(String containerName, String fileName)

    into this (drop the repeated namespace/class and the method parameter names):

        at EasyStorage.SaveDevice.VerifyIsReady()
        at ..Save(String, String)

    Or maybe even just including the innermost method, then only line numbers up the stack, etc. TL;DR: Given an exception with a stack trace, how would you get the most useful debugging information out of 250 characters? (It will be a .NET exception/stacktrace.)
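
    A rough sketch of that trimming transformation (written in Java purely for illustration - the question's traces are .NET, but the string manipulation is the same; the frame strings are taken from the example above):

        import java.util.Arrays;
        import java.util.List;

        public class TraceTrimmer {
            // Rewrites frames so that a namespace/class prefix repeated from the
            // previous frame collapses to "..", and parameter names are dropped
            // while parameter types are kept, as in the example above.
            static String trim(List<String> frames, int budget) {
                StringBuilder out = new StringBuilder();
                String prevPrefix = "";
                for (String frame : frames) {
                    String name = frame.substring(0, frame.indexOf('('));
                    String prefix = name.substring(0, name.lastIndexOf('.') + 1);
                    String method = name.substring(name.lastIndexOf('.') + 1);
                    StringBuilder types = new StringBuilder();
                    for (String p : frame.substring(frame.indexOf('(') + 1, frame.lastIndexOf(')')).split(",")) {
                        if (p.trim().isEmpty()) continue;
                        if (types.length() > 0) types.append(", ");
                        types.append(p.trim().split(" ")[0]); // keep the type, drop the name
                    }
                    String compact = "at " + (prefix.equals(prevPrefix) ? ".." : prefix)
                                   + method + "(" + types + ")";
                    prevPrefix = prefix;
                    if (out.length() + compact.length() + 1 > budget) break; // respect the 250-char cap
                    out.append(compact).append('\n');
                }
                return out.toString();
            }

            public static void main(String[] args) {
                // Prints the compacted form shown in the question.
                System.out.print(trim(Arrays.asList(
                        "EasyStorage.SaveDevice.VerifyIsReady()",
                        "EasyStorage.SaveDevice.Save(String containerName, String fileName)"), 250));
            }
        }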

    Read the article

  • Established coding standards for pl/pgsql code

    - by jb01
    I need to standardize coding practices for a project that comprises, among other things, a pl/pgsql database with some amount of nontrivial code. I am looking for:

    - code formatting guidelines, especially inside procedures
    - guidelines on which constructs are considered unsafe (if any)
    - naming conventions
    - code documentation conventions (if this is practised)

    Any pointers to documents that define good practices in pl/pgsql code? If not, I'm looking for hints about practices that you consider good. There is a related question regarding T-SQL, "Can anyone recommend coding standards for TSQL?", which is relevant to psql as well, but I need more information on stored procedures. Other related questions: http://stackoverflow.com/questions/1070275/what-indenting-style-do-you-use-in-sql-server-stored-procedures

    Read the article

  • What are the typical applications of Lisp macros?

    - by Giorgio
    I am trying to learn some LISP and I have read a lot about the importance of LISP macros so I would like to get some working experience with them. Can you suggest a practical application area that would allow me to use macros to solve a real-world problem, and to understand the usefulness of this programming construct? NOTE This is not a generic what project should I do next question. I am interested to understand which kinds of problems are typically solved by means of LISP macros. E.g., are they good for implementing abstract data types? Why was this construct added to the language? What kinds of problems does it solve that cannot be solved by means of simple functions?

    Read the article

  • Why does Clang/LLVM warn me about using default in a switch statement where all enumerated cases are covered?

    - by Thomas Catterall
    Consider the following enum and switch statement:

        typedef enum {
            MaskValueUno,
            MaskValueDos
        } testingMask;

        void myFunction(testingMask theMask) {
            switch (theMask) {
                case MaskValueUno: {}  // deal with it
                case MaskValueDos: {}  // deal with it
                default: {}  // deal with an unexpected or uninitialized value
            }
        }

    I'm an Objective-C programmer, but I've written this in pure C for a wider audience. Clang/LLVM 4.1 with -Weverything warns me at the default line: "Default label in switch which covers all enumeration values". Now, I can sort of see why this is there: in a perfect world, the only values entering the argument theMask would be in the enum, so no default is necessary. But what if some hack comes along and throws an uninitialized int into my beautiful function? My function will be provided as a drop-in library, and I have no control over what could go in there. Using default is a very neat way of handling this. Why do the LLVM gods deem this behaviour unworthy of their infernal device? Should I be preceding this with an if statement to check the argument?

    Read the article

  • Learning Zend Framework 1 or 2?

    - by ehijon
    I have programmed in PHP for a few years, and now I'm going to learn Zend Framework. Zend Framework is very popular, and there are a lot of tutorials, books and documentation out there. But I have seen in the last months that there is a second version of Zend Framework, though it's not as widely used and popular - not yet. I think it is better to start with the new version, but I don't know what to do now, as when I look at job offers, many require the first version. Which version do you suggest?

    Read the article

  • Is there a real difference between dynamic analysis and testing?

    - by user970696
    Testing is often regarded as a dynamic analysis of software. Yet while I was writing my thesis, the reviewer noted to me that dynamic analysis is about analyzing the program behind the scenes - e.g. profiling - and that it is not the same as testing, because it is "analysis" which looks inside and observes. I know that "static analysis" is not testing; should we then separate this "dynamic analysis" from testing as well? Some books do refer to dynamic analysis in this sense. I would maybe say that testing is one means of dynamic analysis?

    Read the article

  • Delphi Client-Server Application using Firebird 2.5 error

    - by Japie Bosman
    I have got a lengthy question to ask. First of all, I'm still very new when it comes to Delphi programming, and my experience has been mostly developing small single-user database applications using ADO and an Access database. I need to make the transition now to a client-server application, and this is where the problem starts. I decided to use Firebird 2.5 embedded as my database, as it is open source, can be used with the InterBase components in Delphi, and allows multiple clients to access the database simultaneously. So I followed the InterBase tutorial in Delphi. I managed to connect the client to the server and see the data in the example (while both were running on my PC), but when I tried to move the client to another PC, keeping the server on mine, and ran it to see if I could connect to the server, it gave me the following error:

    Exception EIdSocketError in module clientDemo.exe at 0029DCAC. Socket Error # 10061 Connection refused.

    I understand that this might be because the host is defined as localhost in the client. But here is my first question: in the TSQLConnection you can set the hostname under Driver-Hostname. The thing I want to know is how you do this at run time, as I cannot get at the property when I try to make an edit box to allow the user to enter the value and then set it via code, for example:

        SQLConnection1.Driver.Hostname := edtHost.Text;

    The thing is, there is no such property to set, so how do you set the hostname at run time? I'm using Delphi XE2. There are still a lot of questions to come, especially when it comes to deployment, but I will take this piece by piece, and I appreciate the advice.

    Read the article

  • Learning a programming language for computing functions on integers

    - by asd
    Hi. I know something about Pascal, Mathematica and Matlab, but I don't have any idea about the C, C++, and C# languages. I want to learn one of the languages that are fast and exact at computing arithmetic functions for large numbers (for example, larger than 10^3000). I asked somebody; he said he used C++ and that he computed such a sequence in less than 10 minutes. I want to know about C, C++, C# and the Visual flavors of these, and which is better for my goal. Let f be an arithmetic function and A = {k1, k2, ..., kn} integers in increasing order. Now I want to start with k1 and compare f(ki) with f(k1). If f(ki) > f(k1), put ki as k1. Now start with ki, and compare f(kj) with f(ki), for j > i. If f(kj) > f(ki), put kj as ki, and repeat this procedure. At the end we will have a subsequence B = {L1, ..., Lm} of A with this property: f(L(i+1)) > f(L(i)), for any 1 <= i <= m-1. I have written code for this program in Mathematica, and it takes some hours to compute f of the ki's, or the set B, for large numbers. For example, let f be the divisor function of integers. Do you know how to write the code for my purpose in Mathematica or Matlab? Mathematica is preferable.
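
    A sketch of that record-scan in Java, since exact arithmetic on integers beyond 10^3000 is what java.math.BigInteger provides (the divisor-count f below is only a toy trial-division implementation for small inputs; numbers of that size would need a much smarter f):

        import java.math.BigInteger;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Function;

        public class RecordSubsequence {
            // Scan A in increasing order, keeping each k whose f(k) strictly
            // beats the best f-value seen so far - the procedure described above.
            static List<BigInteger> records(List<BigInteger> a,
                                            Function<BigInteger, BigInteger> f) {
                List<BigInteger> b = new ArrayList<>();
                BigInteger best = null;
                for (BigInteger k : a) {
                    BigInteger fk = f.apply(k);
                    if (best == null || fk.compareTo(best) > 0) {
                        b.add(k);
                        best = fk;
                    }
                }
                return b;
            }

            // Toy divisor-counting function by trial division; fine for small k only.
            static BigInteger divisorCount(BigInteger k) {
                BigInteger count = BigInteger.ZERO;
                for (BigInteger d = BigInteger.ONE; d.compareTo(k) <= 0; d = d.add(BigInteger.ONE)) {
                    if (k.mod(d).signum() == 0) count = count.add(BigInteger.ONE);
                }
                return count;
            }

            public static void main(String[] args) {
                List<BigInteger> a = new ArrayList<>();
                for (int k = 1; k <= 50; k++) a.add(BigInteger.valueOf(k));
                // Prints [1, 2, 4, 6, 12, 24, 36, 48]: the divisor-count records up to 50.
                System.out.println(records(a, RecordSubsequence::divisorCount));
            }
        }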

    Read the article

  • How do I deal with code of bad quality contributed by a third party?

    - by lindelof
    I've recently been promoted to managing one of our most important projects. Most of the code in this project has been written by a partner of ours, not by ourselves. The code in question is of very questionable quality. Code duplication, global variables, 6-page-long functions, Hungarian notation, you name it. And it's in C. I want to do something about this problem, but I have very little leverage over our partner, especially since the code, for all its problems, "just works, doesn't it?". To make things worse, we're now nearing the end of this project and must ship soon. Our partner has committed a certain number of person-hours to this project and will not put in more hours. I would very much appreciate any advice or pointers you could give me on how to deal with this situation.

    Read the article

  • Where can I find programming work online?

    - by explorest
    I have set up an ideal, quiet, non-interrupting environment at home. I am extremely productive here. I don't want to leave my home - not my room, not even my couch. How and where do I find work online so that I don't have to travel to it? Kindly post about your own personal experiences. Have you done it full time from home? Where and how? I am outside the United States, in a third-world country, so lower pay is not an issue. The issue is the work environment.

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV plus the server portion) and a project I think I can have much fun with, both in terms of making it and using it. However, I see a need to do mocking of types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it - unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would this be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this; only the unit-test libraries would be. What are your thoughts on this? As an example, I have in my old, private code-base the ability to initiate a WebDAV server with just this: var server = new WebDAVServer(); This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, this object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.

    Read the article

  • Usage of repository between EF model and code consumer

    - by jim
    I have binary data in my database that I'll have to convert to bitmaps at some point. I was thinking about whether or not it's appropriate to use a repository and do the conversion there. My consumer, which is a presentation layer, will use this repository. For example:

        // This is a class I created for modeling the item as is.
        public class RealItem
        {
            public string Name { get; set; }
            public Bitmap Image { get; set; }
        }

        public abstract class BaseRepository
        {
            // Using Unity (http://unity.codeplex.com) to inject the dependency of the entity context.
            [Dependency]
            public Context Context { get; set; }
        }

        public class ItemRepository : BaseRepository
        {
            public List<RealItem> Select()
            {
                IEnumerable<Items> items = from item in Context.Items
                                           select item;
                List<RealItem> lst = new List<RealItem>();
                foreach (Items item in items)
                {
                    MemoryStream stream = new MemoryStream(item.Image);
                    Bitmap image = (Bitmap)Image.FromStream(stream);
                    RealItem ritem = new RealItem { Name = item.Name, Image = image };
                    lst.Add(ritem);
                }
                return lst;
            }
        }

    Is this a correct way to use the repository pattern? I'm learning this pattern, and I've seen a lot of examples online that use a repository, but when I looked at their source code... for example:

        public IQueryable<object> Select()
        {
            return from q in base.Context select q;
        }

    as you can see, no behavior is added to the system by their approach, so I was confused: maybe the repository is something else and I got it all wrong? At the end of the day there should be extra benefits to using them, right?

    Read the article

  • Converting ANTLR AST to Java bytecode using ASM

    - by Nick
    I am currently trying to write my own compiler, targeting the JVM. I have completed the parsing step using Java classes generated by ANTLR, and I have an AST of the source code to work from (an ANTLR "CommonTree", specifically). I am using ASM to simplify the generating of the bytecode. Could anyone give a broad overview of how to convert this AST to bytecode? My current strategy is to explore down the tree, generating different code depending on the current node (using "Tree.getType()"). The problem is that I can only recognise tokens from my lexer this way, rather than more complex patterns from the parser. Is there something I am missing, or am I simply approaching this the wrong way? Thanks in advance :)
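
    For what it's worth, here is a minimal sketch of the usual recursive, post-order strategy (the token-type constants and the expression-only grammar are assumptions for illustration; in a real ANTLR project those constants are generated into the parser, and parser-level imaginary node types appear in getType() just like lexer tokens do):

        import org.antlr.runtime.tree.Tree;
        import org.objectweb.asm.MethodVisitor;
        import org.objectweb.asm.Opcodes;

        public class ExprEmitter {
            // Hypothetical token types; ANTLR generates the real ones.
            static final int INT = 1, PLUS = 2, MUL = 3;

            // Post-order walk: emit the children first so their results sit on
            // the operand stack, then emit the instruction for the current node.
            static void emit(Tree node, MethodVisitor mv) {
                switch (node.getType()) {
                    case INT:
                        mv.visitLdcInsn(Integer.parseInt(node.getText()));
                        break;
                    case PLUS:
                        emit(node.getChild(0), mv);
                        emit(node.getChild(1), mv);
                        mv.visitInsn(Opcodes.IADD);
                        break;
                    case MUL:
                        emit(node.getChild(0), mv);
                        emit(node.getChild(1), mv);
                        mv.visitInsn(Opcodes.IMUL);
                        break;
                    default:
                        throw new IllegalStateException("unhandled node: " + node.getText());
                }
            }
        }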

    Read the article
