Search Results

Search found 32799 results on 1312 pages for 'programming process'.


  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    Getting Started with Business Transformations

A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment, from the biggest-picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined as the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1)

Fig. 1: Business Process Management Lifecycle

Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large-scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forefront of both minds are questions such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process-to-application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate answering such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see Figure 2)

Fig. 2: The Level Concept, Generic Process Hierarchy

Some of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency, and clarity in business process objectives and strategy.

The Level Concept in Brief

Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications.
The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short- and long-term process description. It is at Level 1 (in this case the Process Area level) that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization.

Key Considerations

Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process Map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends not only on the materials used to construct it (process areas), but also on the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high-level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are three key factors to consider when building an Enterprise Process Map: (1) Company Strategic Focus; (2) Process Categorization: Customer is Core; and (3) End-to-end versus Functional Processes.

Company Strategic Focus

As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From the Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives and, in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see Figure 3).

Fig. 3: Company X Enterprise Process Map

In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas as belonging to customer-focused categories.
Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see Figure 4).

Fig. 4: Company Y Enterprise Process Map

Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas which, when executed, deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high-level assignments of responsibility to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and to serve as a driver for focusing the business on the strategic objectives set forth in it.

Process Categorization: Customer is Core

In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how a company categorizes its processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariant has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company.
It is the value chain, beginning with the customer in mind, and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (See Figure 5)

Fig. 5: Company Z Enterprise Process Map

Company Z has what may be viewed as several different perspectives or "cuts" baked into its Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom) of Management Processes, the Core Value Chain and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, the company has divided its processes into the focus areas of its two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, the use of category names such as "Core," "Management" or "Support" can be a touchy subject. What is important to understand is that, no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.

End-to-end versus Functional Processes

Every level of process, high or low (function, task, activity, process/work step, or whatever an organization calls it), should add value to the flow of business in an organization. Suppose that within the process "Deliver package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each with its own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: just as the package cannot be delivered (the outcome of the process) without first being retrieved, loaded, and the travel destination reached (the outcomes of the process steps), a higher-level process such as "Play Practical Joke" (e.g., a main process or process area) cannot be completed until the package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, the package cannot be created, so the end-to-end process breaks.

Fig. 6: End-to-End Process Construction

That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy.
But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, but not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. An Enterprise Process Map, then, should be concerned with both views: the building blocks, and access points to the business-critical end-to-end processes which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be lost in a mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, baselining all project phases, and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to the end-to-end processes, which are most valuable to them. (See Figures 7 and 8)

Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point

These examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high-level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.

Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes

The second example shows a more complex arrangement and categorization of functional processes (the names of the process areas have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map depends, in large part, on the orientation of its audience and the complexity of the landscape at the highest level. If both are not taken into account, the Enterprise Process Map misses an opportunity to serve as a holistic, high-level view.

Conclusion

In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding, and business transformation efforts. Every Enterprise Process Map will, and should, differ from one organization to the next. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations behind Enterprise Process Maps is not a prescriptive "how to" guide; however, a company attempting to create one may not have the practical BPM experience to truly explore its options or their impact on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be cognizant of the message he is delivering and of the factors at hand.
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information please check: Oracle Business Process Management.

    Read the article

  • Imperative Programming v/s Declarative Programming v/s Functional Programming

    - by kaleidoscope
    Imperative Programming :: Imperative programming is a programming paradigm that describes computation in terms of statements that change a program's state. In much the same way as the imperative mood in natural languages expresses commands to take action, imperative programs define sequences of commands for the computer to perform. The focus is on what steps the computer should take rather than on what result the computer should produce (e.g., C, C++, Java).

    Declarative Programming :: Declarative programming is a programming paradigm that expresses the logic of a computation without describing its control flow. It attempts to minimize or eliminate side effects by describing what the program should accomplish, rather than describing how to go about accomplishing it. The focus is on what the computer should do rather than how it should do it (e.g., SQL). A C# example of declarative vs. imperative programming is LINQ. With imperative programming, you tell the compiler what you want to happen, step by step. For example, let's start with this collection, and choose the odd numbers:

        List<int> collection = new List<int> { 1, 2, 3, 4, 5 };

    With imperative programming, we'd step through this, and decide what we want:

        List<int> results = new List<int>();
        foreach (var num in collection)
        {
            if (num % 2 != 0)
                results.Add(num);
        }

    Here's what we are doing: create a result collection, step through each number in the collection, and check each number; if it's odd, add it to the results. With declarative programming, on the other hand, we write code that describes what we want, but not necessarily how to get it:

        var results = collection.Where(num => num % 2 != 0);

    Here, we're saying "Give us everything where it's odd", not "Step through the collection. Check this item; if it's odd, add it to a result collection."

    Functional Programming :: Functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions. Functional programming has its roots in the lambda calculus. It is a subset of declarative programming with a heavy focus on recursion. Functional programming can be a mind-bender, which is one reason why Lisp, Scheme, and Haskell have never really surpassed C, C++, Java and COBOL in commercial popularity. But there are benefits to the functional way. For one, if you can get the logic correct, functional programming requires orders of magnitude less code than imperative programming. That means fewer points of failure, less code to test, and a more productive (and, many would say, happier) programming life. As systems get bigger, this has become more and more important. To know more: http://stackoverflow.com/questions/602444/what-is-functional-declarative-and-imperative-programming http://msdn.microsoft.com/en-us/library/bb669144.aspx http://en.wikipedia.org/wiki/Imperative_programming
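
    For comparison, here is a rough Python rendering of the same "pick the odd numbers" task, showing the three styles side by side. This is an illustrative sketch only; the excerpt's own example is C#/LINQ, and the recursive version below is just one way to express the functional style.

        # The same task in imperative, declarative, and functional style (Python sketch).
        collection = [1, 2, 3, 4, 5]

        # Imperative: say *how*, step by step, mutating a result list.
        results_imperative = []
        for num in collection:
            if num % 2 != 0:
                results_imperative.append(num)

        # Declarative: say *what* we want; the comprehension describes the result.
        results_declarative = [num for num in collection if num % 2 != 0]

        # Functional: no mutation, no explicit loop state; the result is expressed
        # by applying functions (here, recursion over the list).
        def odds(nums):
            if not nums:
                return []
            head, tail = nums[0], nums[1:]
            rest = odds(tail)
            return [head] + rest if head % 2 != 0 else rest

        assert results_imperative == results_declarative == odds(collection) == [1, 3, 5]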

    Read the article

  • What's The Difference Between Imperative, Procedural and Structured Programming?

    - by daniels
    By researching around (books, Wikipedia, similar questions on SE, etc) I came to understand that Imperative programming is one of the major programming paradigms, where you describe a series of commands (or statements) for the computer to execute (so you pretty much order it to take specific actions, hence the name "imperative"). So far so good. Procedural programming, on the other hand, is a specific type (or subset) of Imperative programming, where you use procedures (i.e., functions) to describe the commands the computer should perform. First question: Is there an Imperative programming language which is not procedural? In other words, can you have Imperative programming without procedures? Update: This first question seems to be answered. A language CAN be imperative without being procedural or structured. An example is pure Assembly language. Then you also have Structured programming, which seems to be another type (or subset) of Imperative programming, which emerged to remove the reliance on the GOTO statement. Second question: What is the difference between procedural and structured programming? Can you have one without the other, and vice-versa? Can we say procedural programming is a subset of structured programming, as in the image?
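
    As a rough illustration of the distinction being asked about (a sketch in Python rather than assembly, with the details made up for this example): the same computation can be written as one straight-line imperative sequence of statements, or factored behind a named procedure, while the structured part is simply that both use loops and conditionals instead of jumps.

        # Purely imperative, no procedures: one sequence of statements mutating
        # state to compute the sum of squares of 1..5.
        total = 0
        i = 1
        while i <= 5:          # structured control flow (a loop, not a GOTO)
            total = total + i * i
            i = i + 1
        print(total)

        # Procedural: the same commands, grouped behind a named procedure that
        # can be reused and tested in isolation.
        def sum_of_squares(n):
            total = 0
            for i in range(1, n + 1):
                total += i * i
            return total

        print(sum_of_squares(5))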

    Read the article

  • How does process priority influence a process

    - by Luis Alvarado - The Wolverine
    Assuming we have read the following question: Change niceness (priority) of a running process, and we know about root and non-root permissions: what actually happens when a running process (through renice) or a new process (through nice) gets its priority changed to a more positive or more negative value than it previously had? Does it mean more memory is assigned to it? Does more CPU power go to that particular process? Does it reduce any waiting time for resources for that process? What happens when the process priority changes?
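
    As a small aside, the nice value only biases the kernel's CPU scheduling decisions; it does not assign extra memory or reserve other resources. A minimal sketch, assuming a POSIX system where Python's os.nice is available:

        import os, time

        print("niceness before:", os.nice(0))   # os.nice(0) returns the current value
        os.nice(10)                             # ask the scheduler to deprioritise us;
                                                # lowering niceness would require privileges
        print("niceness after:", os.nice(0))

        # CPU-bound work: with a higher nice value this loop simply gets scheduled
        # less often when the CPU is contended; it is not given less memory.
        start = time.time()
        total = sum(i * i for i in range(5_000_000))
        print("took", round(time.time() - start, 2), "s")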

    Read the article

  • Modern programming language with intuitive concurrent programming abstractions

    - by faif
    I am interested in learning concurrent programming, focusing on the application/user level (not system programming). I am looking for a modern high-level programming language that provides intuitive abstractions for writing concurrent applications. I want to focus on languages that increase productivity and hide the complexity of concurrent programming. To give some examples, I don't consider writing multithreaded code in C, C++, or Java a good option, because IMHO it reduces my productivity and their programming model is not intuitive. On the other hand, languages that increase productivity and offer more intuitive abstractions, such as Python with the multiprocessing module, Erlang, Clojure, Scala, etc., would be good options. What would you recommend based on your experience, and why?
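
    Since the question names Python's multiprocessing module as one of the friendlier options, here is a minimal sketch of the kind of abstraction it means; the worker function and the numbers are made up for illustration.

        from multiprocessing import Pool

        def slow_square(n):
            # stand-in for a CPU-bound task
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            with Pool(processes=4) as pool:
                # distribute the work across 4 worker processes; no locks and
                # no explicit thread management in user code
                results = pool.map(slow_square, [100_000, 200_000, 300_000, 400_000])
            print(results)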

    Read the article

  • Advice on learning programming languages and math.

    - by Joris Ooms
    I feel like I'm getting stuck lately when it comes to learning about programming-related things; I thought I'd ask a question here and write it all down in the hope to get some pointers/advice from people. Perhaps writing it down helps me put things in perspective for myself aswell. I study Interactive Multimedia Design. This course is based on two things: graphic design on one hand, and web development on the other hand. I have quite a decent knowledge of web-related languages (the usual HTML/JS/PHP) and I'll be getting a course on ASP.NET next year. In my free time, I have learnt how to work with CodeIgniter, aswell as some diving into Ruby (and Rails) and basic iOS programming. In my first year of college I also did a class on Java (19/20 on the end result). This grade doesn't really mean anything though; I have the basics of OOP down but Java-wise, we learnt next to nothing. Considering the time I have been programming in, for example, PHP.. I can't say I'm bad at it. I'm definitely not good or great at it, but I'm decent. My teachers tell me I have the programming thing down. They just tell me I should keep on learning. So that's what I do, and I try to take in as much as possible; however, sometimes I'm unsure where to start and I have this tendency to always doubt myself. Now, for the 'question'. I want to get into iOS programming. I know iOS programming boils down to programming in Cocoa Touch and Objective-C. I also know Obj-C is a superset of C. I have done a class on C a couple of years ago, but I failed miserably. I got stuck at pointers and never really understood them.. Until like a month ago. I suddenly 'got' it. I have been working through a book on Objective-C for a week or so now, and I understand the basics (I'm at like.. chapter 6 or so). However, I keep running into similar problems as the ones I had when I did the C class: I suck at math. No, really. I come from a Latin-Modern Languages background in high school and I had nearly no math classes back then. I wanted to study Computer Science, but I failed there because of the miserable state of my mathematics knowledge. I can't explain why I'm suddenly talking about math here though, because it isn't directly related to programming.. yet it is. For example, the examples in the book I'm reading now are about programming a fraction-calculator. All good, I can do the programming when I get the formulas down.. but it takes me a full day or more to actually get to that point. I also find it hard to come up with ideas for myself. I made one small iOS app the other day and it's just a button / label kind of thing. When I press the button, it generates a random number. That's really all I could come up with. Can you 'learn' that? It probably comes down to creativity, but evidently, I'm not too great at being creative. Are there any sites or resources out there that provide something like a basic list of things you can program when you're just starting out? Maybe I'm focusing on too many things at once. I want to keep my HTML/CSS at a decent level, while learning PHP and CodeIgniter, while diving into Ruby on Rails and learning Objective-C and the iOS SDK at the same time. I just want to be good at something, I guess. The problem is that I can't seem to be happy with my PHP stuff. I want more, something 'harder'; that's why I decided to pick up the iOS thing. Like I said, I have the basics down of a lot of different languages. I can program something simple in Java, in C, in Objective-C as of this week.. but it ends there. 
Mostly because I can't come up with ideas for more complex applications, and also because I just doubt myself: 'Oh, that's too complex, I can never do that'. And then it ends there. To conclude my rant, let me basically rephrase my questions into a 'tl;dr' part. A. I want to get into iOS programming and I have basic knowledge of C/Objective-C. However, I struggle to come up with ideas of my own and implement them and I also suck at math which is something that isn't directly related to, yet often needed while programming. What can I do? B. I have an interest in a lot of different programming languages and I can't stop reading/learning. However, I don't feel like I'm good in anything. Should I perhaps focus on just one language for a year or longer, or keep taking it all in at the same time and hope I'll finally get them all down? C. Are there any resources out there that provide basic ideas of things I can program? I'm thinking about 'simple' command-line applications here to help me while studying C/Obj-C away from the whole iPhone SDK. Like I said, the examples in my book are mainly math-based (fraction calculator) and it's kinda hard. :( Thanks a lot for reading my post. I didn't plan it to be this long but oh well. Thanks in advance for any answers.
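
    On question C, one concrete example of the sort of small command-line exercise mentioned (the fraction calculator from the book) might look like the sketch below. Python is used purely for illustration; the same exercise translates directly to C or Objective-C.

        from math import gcd

        def add_fractions(n1, d1, n2, d2):
            # a/b + c/d = (a*d + c*b) / (b*d), then divide by the gcd to reduce
            num = n1 * d2 + n2 * d1
            den = d1 * d2
            g = gcd(num, den)
            return num // g, den // g

        if __name__ == "__main__":
            n1, d1 = (int(x) for x in input("first fraction  (e.g. 1/4): ").split("/"))
            n2, d2 = (int(x) for x in input("second fraction (e.g. 1/6): ").split("/"))
            num, den = add_fractions(n1, d1, n2, d2)
            print(f"sum = {num}/{den}")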

    Read the article

  • Introducing functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well. In the recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?) but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell. In spite of my strong interest for FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has a better support for FP? For example, why should I be interested to have lambdas in Java if I can switch to Scala where I have much more FP concepts and access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)? Java was designed to be OO. Fine. I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java? So, my main point is: why are non-functional programming languages being extended with some functional concept? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for it than in a language in which one paradigm was only added later? The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Or can it be that many non-FP programmers are not really interested in understanding and using functional programming, but they find it practically convenient to adopt certain FP-idioms in their non-FP language? IMPORTANT NOTE Just in case (because I have seen several language wars on this site): I mentioned the languages I know better, this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. 
Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it even though there exist languages that were / are specifically designed to support FP.
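
    As one concrete illustration of the pattern discussed here (a sketch only, with made-up data): Python is itself a language whose core is imperative and object-oriented but which has absorbed a handful of FP constructs, and the two spellings of the same computation coexist.

        from functools import reduce

        orders = [("alice", 30.0), ("bob", 12.5), ("alice", 7.5)]

        # Borrowed FP idioms: higher-order functions, lambdas, no explicit loop state.
        alice_total = reduce(
            lambda acc, amount: acc + amount,
            map(lambda o: o[1], filter(lambda o: o[0] == "alice", orders)),
            0.0,
        )

        # The "native" imperative spelling of the same computation.
        total = 0.0
        for name, amount in orders:
            if name == "alice":
                total += amount

        assert alice_total == total == 37.5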

    Read the article

  • Functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well. In the recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?) but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell. In spite of my strong interest for FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has a better support for FP? For example, why should I be interested to have lambdas in Java if I can switch to Scala where I have much more FP concepts and access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)? Java was designed to be OO. Fine. I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java? So, my main point is: why are non-functional programming languages being extended with some functional concept? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for it than in a language in which one paradigm was only added later? The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me. Also, I asked myself if my impression is just plainly wrong due to lack of knowledge. E.g., do C# and C++11 support FP as extensively as, say, Scala or Caml do? In this case, my question would be simply non-existent. Or can it be that many non-FP programmers are not really interested in using functional programming, but they find it practically convenient to adopt certain FP-idioms in their non-FP language? 
IMPORTANT NOTE Just in case (because I have seen several language wars on this site): I mentioned the languages I know better, this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it even though there exist languages that were / are specifically designed to support FP.

    Read the article

  • Modular programming is the method of programming small tasks or programs

    Modular programming is the method of programming small tasks or sub-programs that can be arranged in multiple variations to produce the desired results. This methodology is great for preventing errors, because each task executes a specific process and can be debugged individually or within a larger program when combined with other tasks or sub-programs. C# is a great example of how to implement modular programming because it allows functions, methods, classes and objects to be used to create smaller sub-programs. A program can be built from smaller pieces of code, which saves development time and reduces the chance of errors, because it is easier to test a small class or function with a simple solution than to test a full program that has layers and layers of small programs working together. Yes, it is possible to write the same program using modular and non-modular programming, but I do not recommend it. When you deal with non-modular programs, they tend to contain a lot of spaghetti code, which can be a pain to develop, not to mention debug, especially if you did not write the code. In addition, in my experience they seem to have a lot more hidden bugs, which waste debugging and development time. Compared to the non-modular approach, the modular programming methodology should be used whenever possible because of its use of small components. These small components allow business logic to be reused and are easier to maintain. From the user's viewpoint, one cannot really tell whether the code is modular or not with today's computers.
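
    A minimal sketch of the decomposition described above, using Python for illustration (the paragraph's own example language is C#) and invented function names: small, single-purpose pieces that can be tested alone and then composed.

        def parse_line(line):
            """Turn 'name,score' into ('name', score)."""
            name, score = line.strip().split(",")
            return name, int(score)

        def passing(records, threshold=50):
            """Keep only records at or above the threshold."""
            return [(name, score) for name, score in records if score >= threshold]

        def report(records):
            """Format the final output."""
            return "\n".join(f"{name}: {score}" for name, score in records)

        # Each piece is small enough to test on its own...
        assert parse_line("ada,92\n") == ("ada", 92)
        assert passing([("ada", 92), ("bob", 40)]) == [("ada", 92)]

        # ...and the full program is just the composition of the pieces.
        lines = ["ada,92", "bob,40", "eve,75"]
        print(report(passing([parse_line(l) for l in lines])))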

    Read the article

  • Difference between extensible programming and extendible programming?

    - by loudandclear
    What exactly is the difference between "extensible programming" and "extendible programming"? Wikipedia states the following: The Lisp language community remained separate from the extensible language community, apparently because, as one researcher observed, any programming language in which programs and data are essentially interchangeable can be regarded as an extendible [sic] language. ... this can be seen very easily from the fact that Lisp has been used as an extendible language for years. If I'm understanding this correctly, it says "Lisp is extendible implies Lisp is not extensible". So what do these two terms mean, and how do they differ?

    Read the article

  • Why (not) logic programming?

    - by Anto
    I have not yet heard about any uses of a logical programming language (such as Prolog) in the software industry, nor do I know of usage of it in hobby programming or open source projects. It (Prolog) is used as an academic language to some extent, though (why is it used in academia?). This makes me wonder, why should you use logic programming, and why not? Why is it not getting any detectable industry usage?

    Read the article

  • How to make the transition to functional programming?

    - by tahatmat
    Lately, I have been very intrigued by F#, which I have been working with a bit. Coming mostly from Java and C#, I like how concise and easily understandable it is. However, I believe that my background in these imperative languages disturbs my way of thinking when programming in F#. I found a comparison of the imperative and functional approach, and I surely do recognize the "imperative way" of programming, but I also find it difficult to define problems to fit well with the functional approach. So my question is: How do I best make the transition from object-oriented programming to functional programming? Can you provide some tips, or perhaps some literature, that can help one think "in functions" in general?

    Read the article

  • Programming Pearls (2nd Edition) vs More Programming Pearls: Confessions of a Coder [closed]

    - by Geek
    I have been reading very good reviews of the books by Jon Bentley: Programming Pearls (2nd Edition) and More Programming Pearls: Confessions of a Coder. I know that these books have been out there for a long time, and I feel bad that I haven't read either one, but it is always better late than never. I understand that the second one was written after the first one. So are these two books complementary to each other? Does the second one assume that the reader has read the first one? For someone who hasn't read either, which one would you propose reading first?

    Read the article

  • Should "closed as duplicate" software programming be extreme or functional? [migrated]

    - by Web Developer
    I'm a web developer who loves this site for its potential and its coffee look. I was reading a great question (this one: click here) and noticed 8 moderators tagged it as DUPLICATED! The question was closed! Obviously it isn't a duplicate, and I'm going to explain why if needed, but it can be seen: the question is unique; it is the case/story of a young person who has SPECIFIC experience with C++, VB and Assembler and who is asking for an answer knowing these specifics. (It is not a general question like "hey, I'm young, can I be a programmer??") Let me know your opinion! Do you think this question should or should not be closed? And let's also think about the people, not only the "data" and "cases covered"... Do you think this is important too? Or is it better to keep a place where people don't count?

    Read the article

  • Is there a process-oriented IDE ?

    - by Raveline
    My problem is simple: when I'm programming in an OO paradigm, I often have part of a main business process divided among many classes. This means that if I want to examine the whole functional chain that leads to the output, for debugging or for optimization research, it can be a bit painful. So I was wondering: is there an IDE that lets you put a "process tag" on functions coming from different objects, and gives you a view of all the functions having the same tag? Edit: To give an example (that I'm making up completely, sorry if it doesn't sound very realistic), let's say we have the following business process for an HR application: receive a holiday request from an employee, check the validity of the request, then give an alert to his boss ("one of those lazy programmers wants another day off"); at the same time, let's say the boss will want a table of every employee's timetable during the time the employee wants his vacation; then handle the boss's answer and send a nice little mail to the employee ("No way, lazy bones"). Even if we get rid of everything not purely business-related (the mail-sending process, DB handling to get the useful info, GUI functionality, and so on), we still have something that doesn't really fit in "one class". I'd like an IDE that would give me the opportunity to quickly take in, at the very least: the function handling the validation of the request by the employee; the function preparing the "timetable" for the boss; and the function handling the validation of the request by the boss. I wouldn't put all those functions in the same class (but perhaps that's my mistake?). This is where my dream IDE could be helpful.
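
    As a rough approximation of the "process tag" idea outside any particular IDE, a small Python sketch (with names invented to mirror the HR example above) can tag functions with a business-process name via a decorator and collect them in a registry, so the whole functional chain for one process can be listed.

        from collections import defaultdict

        PROCESS_REGISTRY = defaultdict(list)

        def process_tag(name):
            # record which business process each decorated function belongs to
            def decorator(func):
                PROCESS_REGISTRY[name].append(func.__qualname__)
                return func
            return decorator

        class HolidayRequests:
            @process_tag("holiday-request")
            def validate_request(self, request): ...

        class BossDashboard:
            @process_tag("holiday-request")
            def team_timetable(self, period): ...

        class Notifications:
            @process_tag("holiday-request")
            def answer_employee(self, decision): ...

        # "Give me a view of all functions having the same tag":
        print(PROCESS_REGISTRY["holiday-request"])
        # ['HolidayRequests.validate_request', 'BossDashboard.team_timetable',
        #  'Notifications.answer_employee']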

    Read the article

  • Technique to Solve Hard Programming logic

    - by Paresh Mayani
    I have heard about many techniques which are used by developers/software managers to solve hard programming logic or to create the flow of an application, a flow which developers then implement to create the actual application. Some of the techniques I know are: flowcharts, screen layouts, data flow diagrams, E-R diagrams, and algorithms for every program. I'd like to know two things: (1) Are there any techniques other than these? (2) Which one is the most suitable for solving hard programming logic and for the process of application creation?

    Read the article

  • Have local admin privileges on Windows XP, but getting "Error terminating process: Access is denied"

    - by Chris W. Rea
    On one of the Windows XP machines I use regularly, there is a process that starts up periodically. I'd like to be able to kill the process – sometimes – because it occasionally runs when I'm busy doing something machine-intensive. I've already tried dropping the process priority to "Idle" to mitigate the effects, but it isn't the CPU that's the problem. Rather, the process is very disk-intensive and no matter the process priority, it still causes significant disk thrashing when running, impacting everything else I'm doing at the time. Using Process Explorer, I can find the process, right-click, and choose Kill Process, but I always get the message "Error terminating process: Access is denied." This is not an operating system process, but third-party software. What might that process be doing to prevent itself from being terminated? How can I kill such a process? Is there a way for me to modify the process's security or access control list (ACL) somewhere, using Process Explorer or another tool, so that I can effectively kill it?

    Read the article

  • Declarative programming vs. Imperative programming

    - by EpsilonVector
    I feel very comfortable with Imperative programming. I never have trouble expressing algorithmically what I want the computer to do once I have figured out what it is that I want it to do. But when it comes to languages like SQL or Relational Algebra, I often get stuck because my head is too used to Imperative programming. For example, suppose you have the relations band(bandName, bandCountry), venue(venueName, venueCountry), plays(bandName, venueName), and I want to write a query that says: all venueNames such that for every bandCountry there's a band from that country that plays in a venue of that name. In my mind I immediately go "for each venueName iterate over all the bandCountries and for each bandCountry get the list of bands that come from it. If none of them play in venueName, go to next venueName. Else, at the end of the bandCountries iteration add venueName to the set of good venueNames". ...but you can't talk like that in SQL, and I actually need to think about how to formulate this, with the intuitive Imperative solution constantly nagging in the back of my head. Did anybody else have this problem? How did you overcome it? Did you figure out a paradigm shift? Make a map from Imperative concepts to SQL concepts to translate Imperative solutions into Declarative ones? Read a good book? PS: I'm not looking for a solution to the above query; I did solve it.
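
    Not as an answer to the query itself (which the poster set aside), but as an illustration of the mental shift involved: the imperative "for each venueName, iterate over all the bandCountries..." plan collapses into a single declarative condition in, say, Python's comprehension syntax. The toy data below is invented for the sketch.

        band = {("Muse", "UK"), ("Kraftwerk", "DE")}
        venue = {("Roundhouse", "UK"), ("Berghain", "DE")}
        plays = {("Muse", "Roundhouse"), ("Kraftwerk", "Roundhouse"), ("Muse", "Berghain")}

        band_countries = {country for _, country in band}

        # Declarative reading: keep a venue if, for every band country, some band
        # from that country plays there.
        good_venues = {
            v for v, _ in venue
            if all(
                any((b, v) in plays for b, c in band if c == country)
                for country in band_countries
            )
        }
        print(good_venues)  # {'Roundhouse'}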

    Read the article

  • Which programming language should I learn? [on hold]

    - by Ashkan
    I'm Ashkan and I'm from Iran. I started programming when I was 13 and I have learned a lot of stuff since then, but now I'm totally lost. Since I live in Iran, there are no counselors or professionals out there to help me, so I decided to ask here. I started with Visual Basic, and after 1 year I started to learn HTML, CSS, JavaScript and jQuery. For the past 6 months I've been learning PHP, and I have a basic understanding of OOP. I want to move to America to continue my studies, and I was wondering which programming language helps me the most to get there. Should I learn C++ or Java, or should I study Computer Science and Math? Also, since we are not in a good place financially, I want a programming language that helps me in college and lets me make some money. Thanks in advance, and sorry for my poor English skills.

    Read the article

  • Android programming vs iPhone Programming?

    - by geena
    Hi, I am doing my final project and thinking of a mobile app to develop, but I am new to the mobile OS world and don't know which is good for me to go with. I mean, in the long term, which will be more beneficial to me between Android and iPhone programming, as well as to my final project? :) .......... Thanks for all your suggestions, guys :) Well, I am, if not so bright, then pretty good at Java and C++ :) Although Objective-C is a little different from standard C/C++, I think I can cope with it. Owning a Mac or running Snow Leopard in VMWare is not going to make much difference in iOS development... or is it? Actually, as it is the final project for my BS degree, I am wondering whether it is worth taking on as a final project or not (an iPhone or Android app)... Or is it better to stick with web/desktop development? and what this means that i have to be a

    Read the article

  • Better Programming By Programming Better?

    - by ahmed
    I am not convinced by the idea that developers are either born with it or they are not. Where's the empirical evidence to support these types of claims? Can a programmer move from, say, the 50th to the 90th percentile? However, most developers are not in the 99th or even 90th percentile (by definition), and thus still have room for improvement in programming ability, along with other important skills. The belief in innate talent is "lacking in hard evidence to substantiate it" as well. So how do I reconcile these seemingly contradictory statements? I think the lesson for software developers who wish to keep on top of their game and become experts is to keep exercising the mind via effortful studying. I read a lot of technical books, but many of them aren't making me better as a developer.

    Read the article

  • How to deal with cargo-cult programming attitude?

    - by Aivar
    I have some students (in an introductory programming course) who see a programming language as a set of magic spells which must be cast in order to achieve some effect (instead of seeing it as a flexible medium for expressing their idea of a solution). They tend to copy-paste code from previous, similar-sounding assignments without considering the essence of the problem. Can anyone recommend some exercises or analogies to make those students more confident that they can and should understand the structure and meaning of each piece of code they write?

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this: I create a new branch (with GIT, in my case) I write a few unit tests (with JUnit, for example) I write my code until it passes my tests So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consulant so I cannot have my preferred language/platform/toolkit. I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little so I'm forced to "learn by doing" (actually, programming by "trial and error") and this makes me anxious. Please note that, at some point in the learning process, usually I already have: read one or more five-stars books followed one or more web tutorials (writing working code a line at a time) created a couple of small experimental projects with my IDE (IntelliJ IDEA, at the moment. I use Eclipse, Netbeans and others, as well.) Despite all my efforts, at this point usually I can just have a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial-and-error. Unfortunately, working by trial-and-error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for this small tool. In the same way, I cannot use any more an indipendent small project for my experiments because I need to try the new code in place. I can just write my code in place and rely on GIT for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn-by-doing without making a mess of your codebase? Any tips&tricks, best practice or something like that?
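
    As a concrete (and entirely hypothetical) rendering of the three-step routine described at the start of the post, with pytest standing in for the post's JUnit and the unit under test invented for the example. Step 1 happens in the shell: git checkout -b spike/new-toolkit.

        import pytest

        # Step 3's eventual result, shown here so the sketch is self-contained:
        def normalise_username(raw):
            name = raw.strip().lower()
            if not name:
                raise ValueError("empty username")
            return name

        # Step 2: the tests written first, pinning down the behaviour we want
        # before (or while) experimenting with the unfamiliar toolkit.
        def test_strips_whitespace_and_lowercases():
            assert normalise_username("  Alice ") == "alice"

        def test_rejects_empty_names():
            with pytest.raises(ValueError):
                normalise_username("   ")

        # Run with `pytest thisfile.py`; iterate on the implementation until green,
        # keeping the whole experiment isolated on its own branch.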

    Read the article

  • what is best book to learn optimized programming in java [closed]

    - by Abhishek Simon
    Possible Duplicate: Is there a canonical book for learning Java as an experienced developer? Let me elaborate a little: I used to be a C/C++ programmer, where I used data structure concepts like trees, queues, stacks, etc., and tried to optimize as much as possible: a minimum number of loops and variables, trying to make the code efficient. It's been a couple of years since I started writing Java code, but it is simply not that efficient in terms of performance, memory use, etc. To the point: I want to enter programming challenges using Java, so I need to improve my approach to the things I program. So please suggest some books that can help me learn to program better and have a chance at solving programming challenges.

    Read the article

  • Are books on programming hard to understand?

    - by DarkEnergy
    I've been reading books that are extremely daunting. Accelerated C++ is by far one of those books -- one that I haven't finished. I plan to, but that's another story. When reading a programming book, do you find yourself re-reading a lot of the paragraphs? Sometimes it takes me like an hour to read 20 pages out of a book. Sometimes they become so daunting that it takes me all day to finish a single chapter. I think having these as e-books makes them even harder to read sometimes, since I'm so used to looking down to read a book or just looking at tangible paper. IDK, I just want to know whether reading these books becomes extremely hard for you too, and whether you find yourself rereading the simplest paragraphs 2-3 times just to get the meaning because the previous paragraph left your brain hurting? http://www.it-career-coach.net/2007/03/04/are-computer-programming-books-hard-to-study/ is an article I read on something similar to this. Edit: sometimes I find myself reading a whole page... then I look up and say 'wth did I just read'... I could finish a chapter in 30 minutes to an hour and feel this way too...

    Read the article
