Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

Page 196/605 | < Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >

  • What is the supposed productivity gain of dynamic typing?

    - by hstoerr
    I often hear the claim that dynamically typed languages are more productive than statically typed ones. What are the reasons for this claim? Isn't it really just tooling and modern concepts like convention over configuration, the use of functional programming, advanced programming models and consistent abstractions? Admittedly there is less clutter because the often redundant type declarations (in Java, for instance) are not needed, but you can also omit most type declarations in statically typed languages that use type inference, without losing the other advantages of static typing. And all of this is available in modern statically typed languages like Scala as well. So: what is there to say for productivity with dynamic typing that really is an advantage of the type model itself?

    Read the article

  • Audio Panning using RtAudio

    - by user1801724
    I use the RtAudio library. I would like to implement an audio program where I can control the panning (e.g. shifting the sound from the left channel to the right channel). In my specific case, I use RtAudio in duplex mode (you can find an example here: duplex mode), meaning I link the microphone input to the speaker output. I have searched the web, but I did not find anything useful. Should I apply a filter on the output buffer? What kind of filter?
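
    One way to picture the "filter" being asked about: panning is just a pair of complementary per-channel gains. Below is a minimal sketch of equal-power panning over an interleaved stereo buffer; it is written in Java purely for illustration, is independent of the RtAudio API, and all names are made up. In a duplex callback the same scaling would be applied to the output buffer before handing it to the device.

    // Equal-power pan: pan ranges from -1.0 (full left) to +1.0 (full right).
    // Scales an interleaved stereo buffer [L, R, L, R, ...] in place.
    static void applyPan(float[] interleaved, double pan) {
        double angle = (pan + 1.0) * Math.PI / 4.0;   // maps [-1, 1] to [0, pi/2]
        float leftGain  = (float) Math.cos(angle);
        float rightGain = (float) Math.sin(angle);
        for (int i = 0; i + 1 < interleaved.length; i += 2) {
            interleaved[i]     *= leftGain;
            interleaved[i + 1] *= rightGain;
        }
    }

    Equal-power (rather than linear) gains keep the perceived loudness roughly constant as the pan moves.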

    Read the article

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me; this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life". Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... but projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related work, and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of them. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web-related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: how broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • Can an agile shop ever really score 12 on the Joel Test?

    - by Simon
    I really like the Joel Test, use it myself, and encourage my staff and interviewees to consider it carefully. However, I don't think I can ever score more than 9, because a few points seem to contradict the Agile Manifesto, XP and TDD, which are the bedrock of my world. Specifically, the questions about schedule, specs, testers and quiet working conditions run counter to what we are trying to create and the values that we have adopted in being genuinely agile. So my question is whether it is possible for a truly agile shop to score 12.

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs:

    - Any record generated must be able to be connected to any other record in any other user table (excluding itself... the record, not the table).
    - These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others.
    - The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended.
    - A record's field can also include aggregate information from its connections (average, sum, etc.) that must be updated when a connected record changes.
    - To conserve memory, only relevant information must be loaded at any one time (I can't load the entire database into memory at startup and go from there).
    - I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote DB.
    - Neither the user tables, the connections nor the records are known at design time, as they are user-generated.

    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all of a table's records and connections. I tried this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became... onerous (i.e. a huge spaghettified mess), and tracing dependencies became almost impossible. Instead, I've settled on a distributed graph model where each record and connection is "aware" of what's around it by managing its own data and connections to other records. This increases my memory footprint but also lets me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: tracing dependencies, eliminating cyclic recursive updates, etc.

    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask whether anybody else has ideas on how this should be structured. Connections are fairly simple; they contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:

    1. Store all the connections in one big table. If I do this, either I load all connections at once (one big DB call) or make a call every time a user table is loaded. The big issue here: the connections table has the potential to be huge, and I'm afraid it would slow things down.

    2. Store all the outgoing connections for each user table in a separate table. This is probably the worst idea I've had. Now my connections are spread out over multiple tables (one per user table), which means I have to make a separate DB call to each table (or one huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.

    3. Store all outgoing AND incoming connections for each user table in a separate table (using a flag to distinguish incoming from outgoing). This is the idea I'm leaning towards, but it essentially doubles the total DB storage for the connections (each connection is stored in two tables). It also means I have to keep the connection information in sync in both places, which is obviously not ideal; on the other hand, when I load a user table, I only need to load one connection table to have all the information I need.

    Option 3 also presents a separate problem: connection object creation. Since each user table has a list of all its connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once, so I'll have to devise a common caching/factory object to make sure only one connection object is made per connection (see the sketch below). Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
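
    A minimal Java sketch of that caching/factory idea, assuming a value-type key built from the connection's endpoints (ordering fields omitted); all names here are illustrative, not part of the original design:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical identity of a connection between two records.
    record ConnectionKey(long fromTableId, long fromRecordId,
                         long toTableId, long toRecordId) {}

    class Connection {
        final ConnectionKey key;
        Connection(ConnectionKey key) { this.key = key; }
    }

    // Hands out at most one Connection instance per key, no matter
    // which side (incoming or outgoing list) asks for it first.
    class ConnectionFactory {
        private final Map<ConnectionKey, Connection> cache = new ConcurrentHashMap<>();

        Connection get(ConnectionKey key) {
            return cache.computeIfAbsent(key, Connection::new);
        }
    }

    A weak-reference cache would additionally let unused Connection objects be collected, which matters when only part of the graph may be resident at once.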

    Read the article

  • Why don't research papers that mention custom software release the source code?

    - by Antoine
    Is there a reason why the source code of software mentioned in research papers is not released? I understand that research papers are more about the general idea of accomplishing something than about implementation details, but I don't get why they don't release the code. For example, this paper ends with: "Results: The human line drawing system is implemented through the Qt framework in C++ using OpenGL, and runs on a 2.00 GHz Intel dual core processor workstation without any additional hardware assistance. We can interactively draw lines while the system synthesizes the new path and texture." Do they keep the source code closed intentionally because they intend to monetize it, or because of copyright?

    Read the article

  • Is it possible to make an MS-SQL scalar function do this?

    - by Hokken
    I have a 3rd party application that can call an MS-SQL scalar function to return some summary information from a table. I was able to return the summary values with a table-valued function, but the application won't utilize table-valued functions, so I'm kind of stuck. Here's a sample from the table:

    trackingNumber  projTaskAward  expenditureType  amount
    1122            12345-67-89    Supplies         100
    1122            12345-67-89    Supplies         150
    1122            12345-67-89    Supplies         250
    1122            12345-67-89    Misc             50
    1122            12345-67-89    Misc             100
    1122            98765-43-21    General          200
    1122            98765-43-21    Conference       500
    1122            98765-43-21    Misc             300
    1122            98765-43-21    Misc             100
    1122            98765-43-21    Misc             100

    I want to summarize the amounts by projTaskAward and expenditureType, based on the trackingNumber. Here is the output I'm looking for:

    Proj/Task/Award: 12345-67-89   Expenditure Type: Supplies     Total: 500
    Proj/Task/Award: 12345-67-89   Expenditure Type: Misc         Total: 150
    Proj/Task/Award: 98765-43-21   Expenditure Type: General      Total: 200
    Proj/Task/Award: 98765-43-21   Expenditure Type: Conference   Total: 500
    Proj/Task/Award: 98765-43-21   Expenditure Type: Misc         Total: 500

    I'd appreciate any help anyone can give in steering me in the right direction.

    Read the article

  • Is there a more intelligent way to do this besides a long chain of if statements or switch?

    - by Harrison Nguyen
    I'm implementing an IRC bot that receives a message and I'm checking that message to determine which functions to call. Is there a more clever way of doing this? It seems like it'd quickly get out of hand after I got up to like 20 commands. Perhaps there's a better way to abstract this?

    public void onMessage(String channel, String sender, String login, String hostname, String message) {
        if (message.equalsIgnoreCase(".np")) {
            // TODO: Use Last.fm API to find the now playing
        } else if (message.toLowerCase().startsWith(".register")) {
            cmd.registerLastNick(channel, sender, message);
        } else if (message.toLowerCase().startsWith("give us a countdown")) {
            cmd.countdown(channel, message);
        } else if (message.toLowerCase().startsWith("remember am routine")) {
            cmd.updateAmRoutine(channel, message, sender);
        }
    }
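
    One common refactoring for this pattern is a dispatch table: each prefix is registered once against a handler, and the message loop just looks the handler up. A minimal sketch, assuming handlers only need the channel and message (a real bot would pass sender, login and hostname through a small context object); all names are illustrative:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BiConsumer;

    public class CommandDispatcher {
        // Maps a lower-case prefix to its handler; insertion order decides
        // which handler wins if several prefixes match.
        private final Map<String, BiConsumer<String, String>> handlers = new LinkedHashMap<>();

        public void register(String prefix, BiConsumer<String, String> handler) {
            handlers.put(prefix.toLowerCase(), handler);
        }

        // Returns true if some handler consumed the message.
        public boolean dispatch(String channel, String message) {
            String lower = message.toLowerCase();
            for (Map.Entry<String, BiConsumer<String, String>> entry : handlers.entrySet()) {
                if (lower.startsWith(entry.getKey())) {
                    entry.getValue().accept(channel, message);
                    return true;
                }
            }
            return false;
        }
    }

    With this in place, onMessage shrinks to a single dispatcher.dispatch(channel, message) call, and adding a 21st command is one register call rather than another else-if branch.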

    Read the article

  • How can one manage thousands of IF...THEN...ELSE rules?

    - by David
    I am considering building an application which, at its core, would consist of thousands of if...then...else statements. The purpose of the application is to predict how cows move around in any landscape. They are affected by things like the sun, wind, food sources, sudden events, etc. How can such an application be managed? I imagine that after a few hundred if statements, how the program reacts would be all but unpredictable, and debugging what led to a certain reaction would mean traversing the whole if-statement tree every time. I have read a bit about rules engines, but I do not see how they would get around this complexity.
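
    To make the rules-engine idea concrete, here is a minimal sketch of the core of one: each rule becomes data (a named condition plus an action), so rules can be listed, tested individually, and logged when they fire, which is exactly what makes the behaviour traceable. WorldState and Cow are hypothetical stand-ins, not types from the question:

    import java.util.List;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    class WorldState { double sunIntensity; double windSpeed; }
    class Cow { double x, y; }

    // A rule pairs a named condition on the world with an action on a cow.
    record Rule(String name, Predicate<WorldState> when, Consumer<Cow> then) {}

    class RuleEngine {
        private final List<Rule> rules;
        RuleEngine(List<Rule> rules) { this.rules = rules; }

        // Fire every matching rule and log its name, so any reaction can be
        // traced back to the exact rules that produced it.
        void tick(WorldState world, Cow cow) {
            for (Rule rule : rules) {
                if (rule.when().test(world)) {
                    System.out.println("rule fired: " + rule.name());
                    rule.then().accept(cow);
                }
            }
        }
    }

    For example, new Rule("seek shade", w -> w.sunIntensity > 0.8, c -> c.x += 1.0) is one line that can be unit-tested on its own; a debugger never has to walk a nested if tree to find out why a cow moved.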

    Read the article

  • How to design a scriptable communication emulator?

    - by Hawk
    Requirement: we need a tool that simulates a hardware device communicating via RS232 or TCP/IP, so that we can test our main application, which will talk to the device.

    Current flow: the user loads a script, the script is parsed into commands, the user runs the script, and the commands are executed.

    Script / commands (simplified for discussion):

    Connect RS232 = RS232ConnectCommand
    Connect TCP/IP = TcpIpConnectCommand
    Send data = SendCommand
    Receive data = ReceiveCommand
    Disconnect = DisconnectCommand

    All commands implement the ICommand interface. The command runner simply executes a sequence of ICommand implementations one after another, so ICommand must expose an Execute method; in pseudocode:

    void Execute(ICommunicator context)

    The Execute method takes a context argument which allows the command implementations to do their work. For instance, SendCommand will call context.Send, etc.

    The problem: RS232ConnectCommand and TcpIpConnectCommand need to instantiate the context that subsequent commands will use. How do you handle this elegantly?

    Solution 1: change the ICommand Execute method to:

    ICommunicator Execute(ICommunicator context)

    While this would work, it seems like a code smell: every command now has to return the context, which for all commands except the connection ones is just the context that was passed in.

    Solution 2: create an ICommunicatorWrapper (ICommunicationBroker?) which follows the decorator pattern and decorates ICommunicator. It introduces one new method:

    void SetCommunicator(ICommunicator communicator)

    and ICommand is changed to use the wrapper:

    void Execute(ICommunicatorWrapper context)

    This seems like a cleaner solution.

    Question: is this a good design? Am I on the right track?

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring driver management. In my multi-tier architecture I have:

    Base class: DAL.Device (my entity)
    Interfaces: BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), BL.IDriverFactory (handles the driver creation requests)

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs (a) changing code within the DriverFactory and (b) referencing the new IDriver implementation / assembly. From a customer's point of view that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy): BL.RestDriver, BL.RestDriverCreator, DAL.RestDevice. After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also comes down to comparing strings). My question: is that a good approach across layer boundaries? If not, what would be a good approach?
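
    The question is .NET, but the convention-lookup mechanics are language-neutral; here is a minimal Java sketch of the idea, with the package name and suffixes as made-up assumptions:

    // Resolves a creator by naming convention: FooDevice -> FooDriverCreator.
    static Object creatorFor(Object device) throws Exception {
        String simpleName = device.getClass().getSimpleName();   // e.g. "RestDevice"
        if (!simpleName.endsWith("Device")) {
            throw new IllegalArgumentException("not a Device: " + simpleName);
        }
        String prefix = simpleName.substring(0, simpleName.length() - "Device".length());
        // Load the conventionally named creator from a known package.
        Class<?> creatorClass = Class.forName("bl.drivers." + prefix + "DriverCreator");
        return creatorClass.getDeclaredConstructor().newInstance();
    }

    The trade-off holds in any language: the compiler can no longer verify the mapping, so a startup-time scan that fails fast on a missing or misnamed creator is a sensible companion to this approach.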

    Read the article

  • Should one bind data with Eval on aspx or override ItemDataBound in code-behind?

    - by George Chang
    For data bound controls (Repeater, ListView, GridView, etc.), what's the preferred way of binding data? I've seen people use Eval() directly in the aspx/ascx inside the data bound control to pull the data field, but to me it just seems so... inelegant. It seems particularly inelegant when the data needs to be manipulated, so you wind up with shim methods like <%# FormatMyData(DataBinder.Eval(Container.DataItem, "DataField")) %> inside your control. Personally, I prefer to put in Literal controls (or other appropriate controls), attach to the OnItemDataBound event for the control, and populate all the data into the appropriate fields in the code-behind. Are there any advantages to doing one over the other? I prefer the latter, because to me it makes sense to separate the data binding logic from the presentation layer. But maybe that's just me.

    Read the article

  • Game Trees Conceptual Question

    - by Chris Corbin
    I am struggling to conceptually understand a question in a programming assignment for an algorithms class. The problem deals with a fictitious 2-player game named Easy. The rules of the game are simple: each player may choose one of 4 integers {0-3}, after which that integer is not available to the other player. The catch is, if a player picks {0}, it means they quit. The objective is for Player 1 to get {1} and Player 2 to get {2}, in which case they may win; if both or neither succeed, the game ends in a draw. I have been asked to draw the game tree for Easy, showing all nodes (which they explained as 4! = 24), labeling the edges, which represent moves (selecting a number), and the leaves with who won (1 means Player 1 won, -1 means Player 2 won, and 0 means a tie). I have drawn out a game tree which I believe is correct, but I am not 100% certain, hence this question. My game tree only has 16 leaves. I am thinking that when a player picks {0} and quits, the game tree stops there? I don't see how it is possible to get to 24 leaves. Any help would be greatly appreciated, and if you need more information I would be happy to provide it. Thanks

    Read the article

  • Programming in academic environment vs industry environment [closed]

    - by user200340
    Possible Duplicate: Differences between programming in school vs programming in industry? This is a general discussion about programming in an industry environment. The background story is that my colleague sent me a very interesting article called "10 Things Entrepreneurs Don’t Learn in College." The first point in that post is about the author's experience of programming in an academic environment vs an industry environment. After finishing a 4-year Computer Science degree, I am currently working in an academic environment as a developer, mainly writing Java, J2EE and JavaScript code. I know there are differences between academic programming and industry programming, but I was shocked after reading that post, and I am trying to avoid the same thing happening to me, or to others, in the future. Can anyone from industry give some general advice about how programming works in industry? For example: What exactly happens when a task is received? What is the flow from beginning to end? What are the main differences between programming in industry and academia? Is it more structured? Are more frameworks used? It would be great if some code examples could be given. Thanks.

    Read the article

  • Software Life-cycle of Hacking

    - by David Kaczynski
    At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking / security. I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating the complexity of tasks, and continuous integration for version control and automated builds/testing. I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, agile, etc., but I am wondering if there is such a thing as a software development life-cycle for hacking / breaching security. Surely hackers are writing computer code, but what is the life-cycle of that code? I don't think they would be too concerned with maintenance, as once the breach has been found and patched, the code that exploited that breach is useless. I imagine the life-cycle would be something like:

    1. Find a gap in security
    2. Exploit the gap in security
    3. Procure a payload
    4. Utilize the payload

    So my question is: what kind of formal definitions (if any) are there for the development life-cycle of software when the purpose of the product is to breach security?

    Read the article

  • Understanding Visitor Pattern

    - by Nezreli
    I have a hierarchy of classes that represents GUI controls, something like this: Control -> ContainerControl -> Form. I have to implement a series of algorithms that work with these objects, doing various stuff, and I'm thinking that the Visitor pattern would be the cleanest solution. Let's take for example an algorithm which creates an XML representation of a hierarchy of objects. Using the 'classic' approach I would do this:

    public abstract class Control
    {
        public virtual XmlElement ToXML(XmlDocument document)
        {
            XmlElement xml = document.CreateElement(this.GetType().Name);
            // Create element, fill it with attributes declared with control
            return xml;
        }
    }

    public abstract class ContainerControl : Control
    {
        public override XmlElement ToXML(XmlDocument document)
        {
            XmlElement xml = base.ToXML(document);
            // Use foreach to fill XmlElement with child XmlElements
            return xml;
        }
    }

    public class Form : ContainerControl
    {
        public override XmlElement ToXML(XmlDocument document)
        {
            XmlElement xml = base.ToXML(document);
            // Fill remaining elements declared in Form class
            return xml;
        }
    }

    But I'm not sure how to do this with the Visitor pattern. This is the basic implementation:

    public class ToXmlVisitor : IVisitor
    {
        public void Visit(Form form)
        {
        }
    }

    Since even the abstract classes help with the implementation, I'm not sure how to do that properly in ToXmlVisitor. Perhaps there is a better solution to this problem. The reason I'm considering the Visitor pattern is that some algorithms will need references not available in the project where the classes are implemented, and there are a number of different algorithms, so I'm avoiding large classes. Any thoughts are welcome.

    Read the article

  • How do you become aware of new tools?

    - by Konstantin Petrukhnov
    How do you become aware of new tools (libraries, applications, etc.)? This question is only about becoming aware that a tool exists and could be used; learning and trying it is a different issue. Right now I get most of my awareness from the Stack Exchange and Freshmeat sites, but I wonder if there is a more efficient way. E.g. 80% of Freshmeat projects are of no use to me, but that is reasonable overhead, because the tools I find through it save me days or even weeks. Here are some related, but slightly different, questions: How much time do you invest in exploring new technology? How to become aware of new languages, techniques and methodologies? What website are you using most to keep you updated on software development?

    Read the article

  • C# String.Format extension method

    - by Paul Roe
    With the addition of extension methods to C# we've seen a lot of them crop up in our group. One debate revolves around extension methods like this one:

    public static class StringExt
    {
        /// <summary>
        /// Shortcut for string.Format.
        /// </summary>
        /// <param name="str"></param>
        /// <param name="args"></param>
        /// <returns></returns>
        public static string Format(this string str, params object[] args)
        {
            if (str == null)
                return null;
            return string.Format(str, args);
        }
    }

    Does this extension method break any programming best practices that you can name? Would you use it anyway? If not, why? If I renamed the function to "F" but left the XML comments, would that be an epic fail or just a wonderful savings of keystrokes?

    Read the article

  • What is Mozilla's new release management strategy?

    - by RonK
    I saw today that Firefox released a new version (5). I tried reading about what was added and ran into this link: http://arstechnica.com/open-source/news/2011/06/firefox-5-released-arrives-only-three-months-after-firefox-4.ars It states: "Mozilla has launched Firefox 5, a new version of the popular open source Web browser. This is the first update that Mozilla has issued since adopting a new release management strategy that has drastically shortened the Firefox development cycle." I find this very intriguing: any idea what this new strategy is?

    Read the article

  • Ruby on Rails background API polling

    - by Matthew Turney
    I need to build a free/busy calendar integration with Zimbra. Unlike Outlook, it seems, Zimbra requires polling their API. I need to be able to grab the free/busy data in background tasks for tens of thousands of users on a regular time interval, preferably every few minutes. What would be the best way to implement this in a Rails application without bogging down our current Resque tasks? I have considered moving this process to something like Node.js or something similar in Ruby. The biggest problem is that we have no control over the IO, as each client's Zimbra instance could be slow, and we don't want to create a huge backlog of tasks. Thoughts and ideas?

    Read the article

  • What do you call "X <= $foo <= Y" comparison?

    - by Jakob
    While writing a Perl statement like if ( $foo >= X && $foo <= Y ) yet again, I wondered why many programming languages do not support the more comfortable form if ( X <= $foo <= Y ), and what this form is called. I came up with "3-legged comparison" but got no results when searching for it. By the way, there is also the "element-of-set" form if ( $foo in X..Y ), which I only consider more readable when provided via a short keyword. Is there a term for X <= $foo <= Y comparison? Which languages support it?
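
    For contrast, the usual workaround in languages without a chained form is a tiny helper that at least names the range test; a Java sketch, purely illustrative:

    // True when lo <= x && x <= hi, i.e. the "lo <= x <= hi" reading.
    static boolean between(int x, int lo, int hi) {
        return lo <= x && x <= hi;
    }

    // Usage: if (between(foo, X, Y)) { ... }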

    Read the article

  • How to shorten the brain's context-switch delay when you need to use a new technology/framework?

    - by gasan
    The problem is that when I have to deal with a new framework/library/language, it completely slows down my work process. At first it's a kind of shock: you sit at your desk for about a day doing nothing but surfing the net, because you simply can't do anything, not even read the docs. Then, on the second day, I realize that I definitely should do something and start reading about it; then I realize that I don't understand it. I keep reading until I get the feeling that I should show some results immediately, and then I write the code quite fast and the job doesn't seem to be difficult. Then the job is done, and I probably won't return to that technology/framework for a month, or a year, or ever, and after a month I will almost certainly have forgotten almost everything about it. To illustrate, the checkpoints I experience are: shock; long studying times; working with the new tech briefly; never using it afterwards; then completely forgetting it. So what would be the solution here?

    Read the article

  • Why Use !boolean_variable Over boolean_variable == false

    - by ell
    A comment on this question: Calling A Method that returns a boolean value inside a conditional statement says that you should use !boolean instead of boolean == false when testing conditions. Why? To me boolean == false is much more natural in English and is more explicit. I apologise if this is just a matter of style, but I was wondering if there is some other reason for this preference for !boolean?

    Read the article

  • Is there an appropriate coding style for implementing an algorithm during an interview?

    - by GlenPeterson
    I failed an interview question in C years ago about converting hex to decimal, by not exploiting the ASCII table: if (inputDigitByte > 9) hex = inputDigitByte - 'a'. The rise of Unicode has made this question pretty silly, but the point was that the interviewer valued raw execution speed above readability and error handling. They tell you to review algorithms textbooks to prepare for these interviews, yet these same textbooks tend to favor the implementation with the fewest lines of code, even if it has to rely on magic numbers (like "infinity") and a slower, more memory-intensive implementation (like a linked list instead of an array) to do that. I don't know what is right. Coding an algorithm within the space of an interview has at least 3 constraints: time to code, elegance/readability, and efficiency of execution. What trade-offs are appropriate for interview code?

    How closely do you follow the textbook definition of an algorithm? Is it better to eliminate recursion, unroll loops, and use arrays for efficiency? Or is it better to use recursion and special values like "infinity" or Integer.MAX_VALUE to reduce the number of lines of code needed to write the algorithm?

    Interface: make a very self-contained, bullet-proof interface, or something sloppy and fast? On the one extreme, the array to be sorted might be a public static variable. On the other extreme, it might need to be passed to each method, allowing methods to be called individually from different threads for different purposes.

    Is it appropriate to use a linked-list data structure for items that are traversed in one direction vs. using arrays and doubling the size when the array is full? Implementing a singly-linked list during the interview is often much faster to code and easier to remember for recursive algorithms like MergeSort.

    Thread safety: just document that it's unsafe, or say so verbally? How much should the interviewee be looking for opportunities for parallel processing?

    Is bit shifting appropriate? x / 2 or x >> 1

    Polymorphism, type safety, and generics? Comments?

    Variable and method names: qs(a, p, q, r) vs. quickSort(theArray, minIdx, partIdx, maxIdx)

    How much should you use existing APIs? Obviously you can't use a java.util.HashMap to implement a hash table, but what about using a java.util.List to accumulate your sorted results?

    Are there any guiding principles that would answer these and other questions, or is the guiding principle to ask the interviewer? Or maybe this should be the basis of a discussion while writing the code? If an interviewer can't or won't answer one of these questions, are there any tips for coaxing the information out of them?
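
    For readers puzzling over the opening anecdote: the ASCII-table trick relies on '0'-'9' and 'a'-'f' being contiguous character runs (the snippet as quoted drops the + 10 offset for letters). A sketch of the usual form, in Java here, assuming lowercase ASCII input and, as the asker objected, doing no error handling:

    // Converts one lowercase ASCII hex digit to its numeric value by
    // exploiting the layout of the ASCII table.
    static int hexDigitValue(char c) {
        if (c >= '0' && c <= '9') {
            return c - '0';
        }
        return c - 'a' + 10;   // assumes c is in 'a'..'f'; no validation
    }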

    Read the article

  • Where does the term "Front End" come from?

    - by Richard JP Le Guen
    Where does the term "front-end" come from? Is there a particular presentation/talk/job-posting which is regarded as the first use of the term? Is someone credited with coining the term? The Merriam-Webster entry for "front-end" claims the first known use of the term was 1973 but it doesn't seem to provide details about that first known use. Likewise, the Wikipedia page about front and back ends is fairly low quality, and cites very few sources.

    Read the article
