Search Results

Search found 13604 results on 545 pages for 'programming agnostic'.


  • what's the safest OS?

    - by Bob
    I have pretty important stuff on my PC (running Windows): all my programming files, passwords, etc. And now I've started wondering: is it even safe to store all this information on a hard drive? What if some virus (or a pseudo-antivirus) gets to it? Maybe it is better to buy a Mac for this purpose? I kinda don't like Linux, because I hate making a million small decisions manually (what drivers to install, etc.). I'd like to hear some opinions.

    Read the article

  • Can I automatically map chrome's bookmarks bar to its jump list?

    - by Alex Nye
    I would like the contents of my bookmarks bar to be present in my Google Chrome jump list, without the manual tedium of managing both the bar's organization and contents and those of the jump list. If it's possible to automatically manage jump lists in such a way as to make this possible, I'd be delighted. I don't think I'm quite ready to attempt programming such an extension myself.

    Edit: it appears this is not possible. I have submitted a feature request to the Chrome team.

    Read the article

  • Searching major search engines with text such as <%#

    - by Daniel Dyson
    If I type '<%# vs <%"' into any of the major search engines, everything is stripped out except the 'vs'. I understand why they do this. I would just like to know if anyone knows of a way to escape illegal characters so that they are searched properly. I know this is not strictly a programming question, but it is relevant.

    Read the article

  • Prefer extension methods for encapsulation and reusability?

    - by tzaman
    edit4: wikified, since this seems to have morphed more into a discussion than a specific question.

    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" instead of instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods:

    - Interfaces allow a class to formally specify its public contract, the methods and properties that it exposes to the world. Any other class can choose to implement the same interface and fulfill that same contract.
    - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states:

        In general, we recommend that you implement extension methods sparingly and only when you have to.

    (edit: Even in the current .NET library, I can see places where it would've been useful to have extensions instead of instance methods - for example, all of the utility functions of List<T> (Sort, BinarySearch, FindIndex, etc.) would be incredibly useful if they were lifted up to IList<T> - getting free bonus functionality like that adds a lot more benefit to implementing the interface.)

    So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?

    (edit2: In response to Tomas - while C# did start out with Java's (overly, imo) OO mentality, it seems to be embracing more multi-paradigm programming with every new release; the main thrust of this question is whether using extension methods to drive a style change (towards more generic / functional C#) is useful or worthwhile.)

    edit3: overridable extension methods

    The only real problem identified so far with this approach is that you can't specialize extension methods if you need to. I've been thinking about the issue, and I think I've come up with a solution. Suppose I have an interface MyInterface, which I want to extend - I define my extension methods in a MyExtension static class, and pair it with another interface, call it MyExtensionOverrider.

    MyExtension methods are defined according to this pattern:

        public static int MyMethod(this MyInterface obj, int arg, bool attemptCast = true)
        {
            if (attemptCast && obj is MyExtensionOverrider)
            {
                return ((MyExtensionOverrider)obj).MyMethod(arg);
            }
            // regular implementation here
        }

    The override interface mirrors all of the methods defined in MyExtension, except without the this or attemptCast parameters:

        public interface MyExtensionOverrider
        {
            int MyMethod(int arg);
            string MyOtherMethod();
        }

    Now, any class can implement the interface and get the default extension functionality:

        public class MyClass : MyInterface { ... }

    Anyone that wants to override it with specific implementations can additionally implement the override interface:

        public class MySpecializedClass : MyInterface, MyExtensionOverrider
        {
            public int MyMethod(int arg)
            {
                // specialized implementation for one method
            }
            public string MyOtherMethod()
            {
                // fall back to the default for the others
                return MyExtension.MyOtherMethod(this, attemptCast: false);
            }
        }

    And there we go: extension methods provided on an interface, with the option of complete extensibility if needed. Fully general too: the interface itself doesn't need to know about the extension / override, and multiple extension / override pairs can be implemented without interfering with each other. I can see three problems with this approach:

    - It's a little bit fragile - the extension methods and override interface have to be kept synchronized manually.
    - It's a little bit ugly - implementing the override interface involves boilerplate for every function you don't want to specialize.
    - It's a little bit slow - there's an extra bool comparison and cast attempt added to the mainline of every method.

    Still, all those notwithstanding, I think this is the best we can get until there's language support for interface functions. Thoughts?
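
    To make the IList<T> point concrete, here is a minimal sketch of a utility lifted onto the interface as an extension method (an illustrative example, not taken from the libraries discussed):

        using System;
        using System.Collections.Generic;

        public static class ListExtensions
        {
            // Defined once against the interface; every IList<T> implementer
            // picks it up with instance syntax: list.IndexOfFirst(x => x > 10).
            public static int IndexOfFirst<T>(this IList<T> list, Predicate<T> match)
            {
                for (int i = 0; i < list.Count; i++)
                {
                    if (match(list[i]))
                        return i;
                }
                return -1;
            }
        }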

    Read the article

  • Adventures in MVVM – ViewModel Location and Creation

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM

    In this post, I am going to explore how I prefer to attach ViewModels to my Views. I have published the code to my ViewModelSupport project on CodePlex in case you'd like to see how it works along with some examples.

    Some History

    My approach to View-First ViewModel creation has evolved over time. I have constructed ViewModels in code-behind. I have instantiated ViewModels in the resources section of the view. I have used Prism to resolve ViewModels via Dependency Injection. I have created attached properties that use Dependency Injection containers underneath. Of all these approaches, I continue to find issues in either composability, blendability or maintainability. Laurent Bugnion came up with a pretty good approach in MVVM Light Toolkit with his ViewModelLocator, but as John Papa points out, it has maintenance issues. John paired up with Glenn Block to make the ViewModelLocator more generic by using MEF to compose ViewModels. It is a great approach, but I don't like baking specific resolution technologies into the ViewModelSupport project.

    I bring these people up, not to name drop, but to give them credit for the place I finally landed in my journey to resolve ViewModels. I have come up with my own version of the ViewModelLocator that is both generic and container agnostic. The solution is blendable, configurable and simple to use. Use any resolution mechanism you want: MEF, Unity, Ninject, Activator.CreateInstance, lookup tables, new, whatever.

    How to use the locator

    1. Create a class to contain your resolution configuration:

        public class YourViewModelResolver : IViewModelResolver
        {
            private YourFavoriteContainer container = new YourFavoriteContainer();

            public YourViewModelResolver()
            {
                // Configure your container
            }

            public object Resolve(string viewModelName)
            {
                return container.Resolve(viewModelName);
            }
        }

    Examples of doing this are on CodePlex for MEF, Unity and Activator.CreateInstance.

    2. Create your ViewModelLocator with your custom resolver in App.xaml:

        <VMS:ViewModelLocator x:Key="ViewModelLocator">
            <VMS:ViewModelLocator.Resolver>
                <local:YourViewModelResolver />
            </VMS:ViewModelLocator.Resolver>
        </VMS:ViewModelLocator>

    3. Hook up your data context whenever you want a ViewModel (WPF):

        <Border DataContext="{Binding YourViewModelName, Source={StaticResource ViewModelLocator}}">

    This example uses dynamic properties on the ViewModelLocator and passes the name to your resolver to figure out how to compose it.

    4. What about Silverlight?

    Good question. You can't bind to dynamic properties in Silverlight 4 (crossing my fingers for Silverlight 5), but you CAN use string indexing:

        <Border DataContext="{Binding [YourViewModelName], Source={StaticResource ViewModelLocator}}">

    But, as John Papa points out in his article, there is a silly bug in Silverlight 4 (as of this writing) that will call into the indexer six times when it binds. While this is little more than a nuisance when getting most properties, it can be much more of an issue when you are resolving ViewModels six times. If this gets in your way, the solution (as pointed out by John) is to use an IndexConverter (instantiated in App.xaml and also included in the project):

        <Border DataContext="{Binding Source={StaticResource ViewModelLocator}, Converter={StaticResource IndexConverter}, ConverterParameter=YourViewModelName}">

    It is a bit uglier than the WPF version (this method will also work in WPF if you prefer), but it is still not all that bad.
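
    To round out step 1, a minimal resolver along the lines of the Activator.CreateInstance sample might look like this (a sketch only; the resolve-by-simple-type-name convention is an assumption):

        using System;
        using System.Linq;
        using System.Reflection;

        public class ActivatorViewModelResolver : IViewModelResolver
        {
            // Assumes ViewModels live in the executing assembly and are
            // looked up by simple type name, e.g. "MainViewModel".
            public object Resolve(string viewModelName)
            {
                Type type = Assembly.GetExecutingAssembly()
                                    .GetTypes()
                                    .FirstOrDefault(t => t.Name == viewModelName);
                return type == null ? null : Activator.CreateInstance(type);
            }
        }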
    Conclusion

    This approach works really well (I suppose I am a bit biased). It allows for composability from any mechanism you choose. It is blendable (consider serving up different objects in design mode if you wish... or different constructors... whatever makes sense to you). It works in Cider. It is configurable. It is flexible. It is the best way I have found to manage View-First ViewModel hookups. Thanks to the guys mentioned in this article for getting me to something I love using. Enjoy.

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services. But as usual, you lack the time and resources that you need in order to develop the services properly. So you googled or bing'ed, found this blog post, and began crying in gratitude. Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about. So just close your eyes and count to 3 ... now open them ... and here it is....

    Not.

    While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand. The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API. And that legacy API is built in the image of the silo applications. Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path

    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory. Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide. Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch. According to Lafe, building their service inventory is a three-phase process:

    1. Model a business process. This requires intense collaboration between the IT and business wings of the organization, of course. The FBI uses IBM Websphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process. The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction

    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl's process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint. For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing. Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services. Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat. (Astute readers will note that Erl's diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.)
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy

    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world.

    Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."

    Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.

    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.

    Supply Chain Management Systems in a Virtual Enterprise

    Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.

    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

  • Is it bad practice to make an iterator that is aware of its own end

    - by aaronman
    For some background on why I am asking this question, here is an example. In Python, the chain function chains an arbitrary number of ranges together and makes them into one without making copies. Here is a link in case you don't understand it. I decided I would implement chain in C++ using variadic templates. As far as I can tell, the only way to make an iterator for chain that will successfully go to the next container is for each iterator to know about the end of the container (I thought of a sort of hack where, when != is called against the end, it will know to go to the next container, but the first way seemed easier, safer and more versatile). My question is whether there is anything inherently wrong with an iterator knowing about its own end. My code is in C++, but this can be language agnostic since many languages have iterators.

        #ifndef CHAIN_HPP
        #define CHAIN_HPP

        #include "iterator_range.hpp"

        namespace iter {

            template <typename... Containers>
            struct chain_iter;

            template <typename Container>
            struct chain_iter<Container> {
            private:
                using Iterator = decltype(((Container*)nullptr)->begin());
                Iterator begin;
                const Iterator end; // never really used but kept it for consistency

            public:
                chain_iter(Container& container, bool is_end = false)
                    : begin(container.begin()), end(container.end())
                {
                    if (is_end) begin = container.end();
                }

                chain_iter& operator++() {
                    ++begin;
                    return *this;
                }

                auto operator*() -> decltype(*begin) {
                    return *begin;
                }

                bool operator!=(const chain_iter& rhs) const {
                    return this->begin != rhs.begin;
                }
            };

            template <typename Container, typename... Containers>
            struct chain_iter<Container, Containers...> {
            private:
                using Iterator = decltype(((Container*)nullptr)->begin());
                Iterator begin;
                const Iterator end;
                bool end_reached = false;
                chain_iter<Containers...> next_iter;

            public:
                chain_iter(Container& container, Containers&... rest, bool is_end = false)
                    : begin(container.begin()), end(container.end()), next_iter(rest..., is_end)
                {
                    if (is_end) begin = container.end();
                }

                chain_iter& operator++() {
                    if (begin == end) {
                        ++next_iter;
                    } else {
                        ++begin;
                    }
                    return *this;
                }

                auto operator*() -> decltype(*begin) {
                    if (begin == end) {
                        return *next_iter;
                    } else {
                        return *begin;
                    }
                }

                bool operator!=(const chain_iter& rhs) const {
                    if (begin == end) {
                        return this->next_iter != rhs.next_iter;
                    } else {
                        return this->begin != rhs.begin;
                    }
                }
            };

            template <typename... Containers>
            iterator_range<chain_iter<Containers...>> chain(Containers&... containers) {
                auto begin = chain_iter<Containers...>(containers...);
                auto end = chain_iter<Containers...>(containers..., true);
                return iterator_range<chain_iter<Containers...>>(begin, end);
            }
        }

        #endif // CHAIN_HPP
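
    For cross-language comparison, here is the same chaining idea in a language where the compiler generates the iterator state machine for you; a minimal C# sketch (an illustration of the concept, not a translation of the template code above):

        using System.Collections.Generic;

        public static class ChainExtensions
        {
            // Chains any number of sequences lazily, without copying.
            // The generated enumerator tracks "where am I / am I done"
            // internally, so user code never compares against an end marker.
            public static IEnumerable<T> Chain<T>(params IEnumerable<T>[] sources)
            {
                foreach (var source in sources)
                    foreach (var item in source)
                        yield return item;
            }
        }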

    Read the article

  • Philosophy of [WebInvoke(ResponseFormat = WebMessageFormat.Json)]

    - by Mikey Cee
    Hi everyone, I'm writing what I'm referring to as a POJ (Plain Old JSON) WCF web service - one that takes and emits standard JSON with none of the crap that ASP.NET Ajax likes to add to it. It seems that there are three steps to accomplish this:

    1. Change the binding to webHttpBinding in the endpoint's tag
    2. Decorate the method with [WebInvoke(ResponseFormat = WebMessageFormat.Json)]
    3. Add an incantation of [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] to the service contract

    This is all working OK for me - I can pass in and am being returned nice plain JSON. If I remove the WebInvoke attribute, then I get XML returned instead, so it is certainly doing what it is supposed to do.

    But it strikes me as odd that the option to specify JSON output appears here and not in the configuration file. Say I wanted to expose my method as an XML endpoint too - how would I do this? Currently the only way I can see would be to have a second method that does exactly the same thing but does not have WebMessageFormat.Json specified. Then rinse and repeat for every method in my service? Yuck.

    Specifying that the output should be serialized to JSON in the attribute seems to be completely contrary to the philosophy of WCF, where the service is implemented in a transport- and encoding-agnostic manner, leaving the nasty details of how the data will be moved around to the configuration file. Is there a better way of doing what I want to do? Or are we stuck with this awkward attribute? Or do I not understand WCF deeply enough?
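
    For reference, a minimal sketch of steps 2 and 3 applied to a single service class (service and method names are made up; the endpoint still needs webHttpBinding in config per step 1):

        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using System.ServiceModel.Web;

        [ServiceContract]
        [AspNetCompatibilityRequirements(
            RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class PlainJsonService
        {
            [OperationContract]
            [WebInvoke(Method = "POST",
                       RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json)]
            public int Add(int a, int b)
            {
                // Serialized to plain JSON, e.g. 3 for {"a":1,"b":2}
                return a + b;
            }
        }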

    Read the article

  • Visual Studio 2010 64-bit COM Interop Issue

    - by Adam Driscoll
    I am trying to add a VC6 COM DLL to our VS2010RC C# solution. The DLL was compiled with the VC6 tools to create an x86 version, and compiled with the VC7 cross-platform tools to generate an x64 version.

    The x86 version of the assembly works fine as long as the consuming C# project's platform is set to x86. It doesn't matter whether the x64 or the x86 version of the DLL is actually registered. It works with both. If the platform is set to 'Any CPU' I receive a BadImageFormatException on the load of the Interop.<name>.dll.

    As for the x64 version, I cannot even get the project to build. I receive the tlbimp error:

        TlbImp : error TI0000: A single valid machine type compatible with the input type library must be specified.

    Has anyone seen this issue?

    EDIT: I've done a lot more digging into this issue and think this may be a Visual Studio bug. I have a clean solution. I bring in my COM assembly with the language-agnostic 'Any CPU' selected. The processor architecture of the resulting interop DLL is x86 rather than MSIL. I may have to make the interop by hand for now to get this to work. If anyone has another suggestion let me know.

    Read the article

  • Metaprogramming - self explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). Also I want more than your average wiki page, and something language agnostic or preferably with Java examples. Do you know of such resources that will allow me to efficiently put all of it into practice? (I know experience has a lot to say in all of this, but I kind of want to build experience while avoiding the usual flow of bad decisions -> experience -> good decisions.)

    EDIT: Something of the likes of this example from the Pragmatic Programmer:

        ...implement a mini-language to control a simple drawing package... The language consists of single-letter commands. Some commands are followed by a single number. For example, the following input would draw a rectangle:

            P 2    # select pen 2
            D      # pen down
            W 2    # draw west 2cm
            N 1    # then north 1
            E 2    # then east 2
            S 1    # then back south
            U      # pen up

    Thank you!
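
    As a worked illustration of that mini-language, here is a minimal interpreter sketch (a sketch only: drawing is stubbed out with console output, and the command set is exactly the one quoted above):

        using System;
        using System.Collections.Generic;

        public class TurtleInterpreter
        {
            private int x, y;        // current position in cm
            private int pen = 1;     // currently selected pen
            private bool penDown;

            // Executes one command per line, e.g. "W 2" or "U".
            public void Run(IEnumerable<string> lines)
            {
                foreach (var line in lines)
                {
                    // Strip comments introduced by '#'.
                    var code = line.Split('#')[0].Trim();
                    if (code.Length == 0) continue;

                    var parts = code.Split(new[] { ' ' },
                                           StringSplitOptions.RemoveEmptyEntries);
                    int arg = parts.Length > 1 ? int.Parse(parts[1]) : 0;

                    switch (parts[0])
                    {
                        case "P": pen = arg;       break;
                        case "D": penDown = true;  break;
                        case "U": penDown = false; break;
                        case "N": Move(0, arg);    break;
                        case "S": Move(0, -arg);   break;
                        case "E": Move(arg, 0);    break;
                        case "W": Move(-arg, 0);   break;
                        default:
                            throw new FormatException("Unknown command: " + parts[0]);
                    }
                }
            }

            private void Move(int dx, int dy)
            {
                if (penDown)
                    Console.WriteLine("Draw pen {0}: ({1},{2}) -> ({3},{4})",
                                      pen, x, y, x + dx, y + dy);
                x += dx; y += dy;
            }
        }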

    Read the article

  • How do you use asynchronous ORMs without huge callback chains?

    - by hornairs
    I'm using the relatively immature Joose JavaScript ORM plugin (project page) to persist objects in an Appcelerator Titanium (company page) mobile project. Since it's client-side storage, the application has to check whether the database is initialized before starting up the ORM, since the ORM inspects the DB tables to construct the classes. My problem is that this sequence of operations (and, if this one is like this, other things down the road) takes a lot of callbacks to complete. I have a lot of jumping around in the code that isn't apparent to a maintainer and results in some complex call graphs and whatnot. So, I ask these questions:

    1. How would you asynchronously initialize a database and populate it with seed data using an ORM that needs the schema to be correct to function?
    2. Do you have any general strategies or links for async/event-driven programming and keeping the call graph simple and understandable?
    3. Do you have any suggestions for JavaScript ORMs/meta-object systems that work with HTML 5 as a storage engine and are hopefully framework agnostic?
    4. Am I just a big newb and should be able to work this out with ease?

    Thanks folks!
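
    One language-agnostic way to flatten such chains is a small "waterfall" runner: the steps are listed top-to-bottom in order, and each receives a continuation to call when its (possibly asynchronous) work completes. A minimal sketch of the idea, shown here in C#:

        using System;
        using System.Collections.Generic;

        public static class Waterfall
        {
            // Runs the steps in sequence; each step calls "next" when done,
            // so the chain reads linearly instead of nesting ever deeper.
            public static void Run(IList<Action<Action>> steps)
            {
                RunFrom(steps, 0);
            }

            private static void RunFrom(IList<Action<Action>> steps, int index)
            {
                if (index >= steps.Count)
                    return;
                steps[index](delegate { RunFrom(steps, index + 1); });
            }
        }

        // usage:
        // Waterfall.Run(new List<Action<Action>> {
        //     next => { /* check the DB is initialized */ next(); },
        //     next => { /* insert seed data */           next(); },
        //     next => { /* start up the ORM */           next(); },
        // });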

    Read the article

  • Bit reversal of an integer, ignoring integer size and endianness

    - by ΤΖΩΤΖΙΟΥ
    Given an integer typedef:

        typedef unsigned int TYPE;

    or

        typedef unsigned long TYPE;

    I have the following code to reverse the bits of an integer:

        TYPE max_bit = (TYPE)-1;

        void reverse_int_setup()
        {
            TYPE bits = (TYPE)max_bit;

            while (bits <<= 1)
                max_bit = bits;
        }

        TYPE reverse_int(TYPE arg)
        {
            TYPE bit_setter = 1, bit_tester = max_bit, result = 0;

            for (result = 0; bit_tester; bit_tester >>= 1, bit_setter <<= 1)
                if (arg & bit_tester)
                    result |= bit_setter;

            return result;
        }

    One just needs to run reverse_int_setup() first, which stores an integer with the highest bit turned on; then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key to a binary tree, taken from an increasing counter, but that's more or less irrelevant).

    Is there a platform-agnostic way to have, at compile time, the correct value for max_bit (the value it holds after the call to reverse_int_setup())? Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()? Thanks.
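
    On the platform-agnostic part: in languages where the type width is a compile-time constant, the setup step disappears entirely. A minimal C# sketch of the same reversal (sizeof(uint) * 8 is evaluated at compile time, much as sizeof(TYPE) * CHAR_BIT would be in C):

        public static class BitReverser
        {
            private const int Bits = sizeof(uint) * 8; // known at compile time

            public static uint Reverse(uint arg)
            {
                uint result = 0;
                for (int i = 0; i < Bits; i++)
                {
                    // Peel the lowest bit off arg and push it into result.
                    result = (result << 1) | (arg & 1);
                    arg >>= 1;
                }
                return result;
            }
        }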

    Read the article

  • Binding XML in Silverlight without Nominal Classes

    - by AnthonyWJones
    Let's say I have a simple chunk of XML:

        <root>
            <item forename="Fred" surname="Flintstone" />
            <item forename="Barney" surname="Rubble" />
        </root>

    Having fetched this XML in Silverlight, I would like to bind it with XAML of this ilk:

        <ListBox x:Name="ItemList" Style="{StaticResource Items}">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel Orientation="Horizontal">
                        <TextBox Text="{Binding Forename}" />
                        <TextBox Text="{Binding Surname}" />
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    Now I can bind simply enough with LINQ to XML and a nominal class:

        public class Person
        {
            public string Forename { get; set; }
            public string Surname { get; set; }
        }

    So here is the question: can it be done without this class? IOW, coupling between the Silverlight code and the input XML is limited to the XAML only; other source code is agnostic to the set of attributes on the item element.

    Edit: The use of XSD is suggested, but ultimately it amounts to the same thing: an XSD-generated class.

    Edit: An anonymous class doesn't work; Silverlight can't bind to them.

    Edit: This needs to be two-way; the user needs to be able to edit the values and these values end up in the XML. (Changed original TextBlock to TextBox in sample above.)
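
    For reference, the nominal-class route above might be wired up like this (a minimal sketch using the Person class from the question; ItemList.ItemsSource = PersonLoader.Load(xml) completes the hookup):

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        public static class PersonLoader
        {
            // Projects each <item> element into a Person instance for binding.
            public static List<Person> Load(string xml)
            {
                return XDocument.Parse(xml)
                                .Descendants("item")
                                .Select(e => new Person
                                {
                                    Forename = (string)e.Attribute("forename"),
                                    Surname = (string)e.Attribute("surname")
                                })
                                .ToList();
            }
        }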

    Read the article

  • What is your best-practice advice on implementing SQL stored procedures (in a C# winforms application)?

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: "When should you use stored procedures?" and "Are Stored Procedures more efficient, in general, than inline statements on modern RDBMSs?"

    I am a beginner at integrating .NET/SQL, though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regards to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008 and SQL Server 2008; though this question can be regarded as language- and database-agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database.

    Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers:

    1. Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Or am I better off creating all of the stored procedures in my application, inside a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated.

    2. I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so?

    3. Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?
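
    As a reference point for the conversion itself, calling a stored procedure from C# differs from inline SQL only in the CommandType and the command text; a minimal sketch (procedure and parameter names are made up):

        using System.Data;
        using System.Data.SqlClient;

        public static class ItemRepository
        {
            public static int GetItemCount(string connectionString, int categoryId)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.GetItemCount", connection))
                {
                    // The only structural change from inline SQL:
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@CategoryId", categoryId);

                    connection.Open();
                    return (int)command.ExecuteScalar();
                }
            }
        }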

    Read the article

  • Find unique vertices from a 'triangle-soup'

    - by sum1stolemyname
    I am building a CAD-file converter on top of two libraries (Opencascade and DWF Toolkit). However, my question is platform agnostic:

    Given: I have generated a mesh as a list of triangular faces from a model constructed through my application. Each triangle is defined through three vertices, which consist of three floats (x, y & z coordinates). Since the triangles form a mesh, most of the vertices are shared by more than one triangle.

    Goal: I need to find the list of unique vertices, and to generate an array of faces consisting of tuples of three indices into this list.

    What I want to do is this:

        // step 1: build a list of unique vertices
        for each triangle
            for each vertex in triangle
                if not vertex in listOfVertices
                    Add vertex to listOfVertices

        // step 2: build a list of faces
        for each triangle
            for each vertex in triangle
                Get Vertex Index From listOfVertices
                AddToMap(vertex Index, triangle)

    While I do have an implementation which does this, step 1 (the generation of the list of unique vertices) is really slow, on the order of O(n^2), since each vertex is compared to all vertices already in the list. I thought "Hey, let's build a hashmap of my vertices' components using std::map, that ought to speed things up!", only to find that generating a unique key from three floating point values is not a trivial task.

    Here, the experts of stackoverflow come into play: I need some kind of hash function which works on 3 floats, or any other function generating a unique value from a 3D vertex position.
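
    As an illustration of the hashmap route, here is a minimal sketch that keys a dictionary directly on the vertex value. It assumes exact bitwise float equality, just like the original comparison loop; a custom IEqualityComparer (e.g. one that quantizes coordinates) is where you would plug in a hand-rolled hash if near-equal vertices must merge:

        using System.Collections.Generic;

        public struct Vertex
        {
            public float X, Y, Z;
            // Value-type equality and hashing make this usable as a key.
        }

        public static class MeshIndexer
        {
            // Builds the unique vertex list and the index triples in
            // expected O(n) time instead of O(n^2).
            public static void Index(IList<Vertex> triangleSoup,
                                     out List<Vertex> vertices,
                                     out List<int[]> faces)
            {
                var lookup = new Dictionary<Vertex, int>();
                vertices = new List<Vertex>();
                faces = new List<int[]>();

                for (int i = 0; i < triangleSoup.Count; i += 3)
                {
                    var face = new int[3];
                    for (int j = 0; j < 3; j++)
                    {
                        Vertex v = triangleSoup[i + j];
                        int index;
                        if (!lookup.TryGetValue(v, out index))
                        {
                            index = vertices.Count;
                            vertices.Add(v);
                            lookup.Add(v, index);
                        }
                        face[j] = index;
                    }
                    faces.Add(face);
                }
            }
        }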

    Read the article

  • Workflow engine BPMN, Drools, etc or ESB?

    - by Tom
    We currently have an application that is based on an in-house developed workflow engine with a YAML-based DSL. We are looking to move parts of it to Java. I have discovered a number of Java solutions like Intalio, jBPM, Drools Expert, Drools Flow etc.

    They appear to be aimed at businesses where the business analyst creates the workflows using a graphical editor and submits them to the workflow engine. They seem geared towards ease of use for non-technical people rather than for developers, with a focus on human interaction.

    The workflows tend to look like:

        Discover-a-file --------\
                                 -> join -> process-file -> move-file -> register-file
        Discover-some-metadata -/

    If any step fails we need to retry it X times. We also need to be able to stop the system and be able to restart it and have it continue from where it was (durable).

    Some of our workflows can be defined by a set of goals we need to achieve, so Jess's backwards rule chaining sounds interesting, but it is not open source. It might be that what we are after is a Finite State Machine engine, or just an Enterprise Service Bus where we do everything as JMS queues.

    Is there a good open source workflow engine that is both standards-based and also geared towards developers? We don't particularly want to use a graphical workflow designer or write reams of XML, and it should ideally be in Java or language agnostic (making REST/SOAP calls to external services).

    Thanks, Tom

    Read the article

  • Newbie question about Java

    - by Rob Nicholson
    Okay, I know that Java is a language, but somebody has asked me if they can write a web application to interface with a web app I've written in ASP.NET. I'm implementing a web service to serve up XML, so it's pretty language agnostic. However, I'm not 100% sure whether going down the Java route makes a lot of sense. I was kind of expecting PHP or ASP.NET server-side code with maybe some Ajax/JavaScript, or maybe a heavier client JavaScript program using JScript.

    Could some kind soul explain the basic Java environment when it comes to webapps? I've inferred the following - am I barking up the right tree?

    - Java, when run like ASP.NET, is called JSP
    - JavaBeans is a bit like the .NET framework, i.e. it's a library of re-usable components
    - Java EE is a bit like ASP.NET in that it's a framework for building web pages on a server
    - Java can also run on the client, but it needs the Java VM installing

    When running Java on the client, can you use JavaBeans, and is there a framework? Can it also use JScript? I don't think so, as JScript is a JavaScript library.

    Whilst running Java on the server would be okay, this is a relatively small application and therefore Java sounds like a bit of overkill. PHP or ASP.NET feels a better fit. But I don't think they should go down the Java-applet-in-the-browser route, as it adds complexity that's not needed.

    Thanks, Rob.

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel, I can see in the Windows Task Manager that actually all four cores of my CPU are more or less active. One core is more active than the other three, but there is also activity on those. There's no other program running (besides the OS kernel, of course) that could plausibly account for that activity. And when I close my program, all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program.

    Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even if it's not a multithreaded program? Do you have any links that document this technique?

    Some info on the program: it's a console app written in Qt; the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI.

    Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png

    This question is language agnostic and not tied to Qt/C++; I just want to know whether Windows or Intel balance single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly, since the code still has to run synchronously with the kernel API calls.

    EDIT: Do you know any tools to do a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?

    Read the article

  • mod_cgi , mod_fastcgi, mod_scgi , mod_wsgi, mod_python, FLUP. I don't know how many more. what is mo

    - by claws
    I recently learnt Python. I liked it. I just wanted to use it for web development. This thought caused all the troubles. But I like these troubles :)

    Coming from the PHP world, where there is only one standardized way, I expected the same and searched for Python & Apache. http://stackoverflow.com/questions/449055/setting-up-python-on-windows-apache says:

        Stay away from mod_python. One common misleading idea is that mod_python is like mod_php, but for python. That is not true.

    So what is the equivalent of mod_php in Python? I need a little clarification on this one: http://stackoverflow.com/questions/219110/how-python-web-frameworks-wsgi-and-cgi-fit-together

        CGI, FastCGI and SCGI are language agnostic. You can write CGI scripts in Perl, Python, C, bash, or even Assembly :).

    So, I guess mod_cgi, mod_fastcgi and mod_scgi are their corresponding Apache modules. Right?

    WSGI is some kind of optimized/improved - in short, an efficient - version designed specifically for the Python language. In order to use this, mod_wsgi is the way to go. Right?

    This leaves out mod_python. What is it then?

        Apache - mod_fastcgi - FLUP (via CGI protocol) - Django (via WSGI protocol)

        Flup is another way to run with wsgi for any webserver that can speak FCGI, SCGI or AJP

    What is FLUP? What is AJP? How did Django come into the picture?

    These questions raise questions about PHP. How is it actually running? What technology is it using? mod_php & mod_python - what are the differences?

    In future, if I want to use Perl or Java, will I again have to get confused? Kindly, can someone explain things clearly and give a complete picture?

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases - they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted.

    If I were to introduce a "data access layer" and write a class like BookDao, and encapsulate all the JDO code within this, then wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated.

    So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?

    Read the article

  • Embedding Lua functions as member variables in Java

    - by Zarion
    Although the program I'm working on is in Java, answering this from a C perspective is also fine, considering that most of this is either language-agnostic or happens on the Lua side of things.

    In the outline I have for the architecture of a game I'm programming, individual types of game objects within a particular class (e.g. creatures, items, spells, etc.) are loaded from a data file. Most of their properties are simple data types, but I'd like a few of these members to actually contain simple scripts that define, for example, what an item does when it's used. The scripts will be extremely simple, since all fundamental game actions will be exposed through an API from Java. The Lua side is simply responsible for stringing a couple of these basic functions together and setting arguments.

    The question is largely about the best way to store a reference to a specific Lua function as a member of a Java class. I understand that if I store the Lua code as a string and call lua_dostring, Lua will compile the code fresh every time it's called. So the function needs to be defined somehow, and a reference to this specific function wrapped in a Java function object.

    One possibility that I've considered is, during the data loading process, when the loader encounters a script definition in a data file, it extracts this string, decorates the function name using the associated object's unique ID, calls lua_dostring on the string containing a full function definition, and then wraps the generated function name in a Java function object. A function declared in a script run with lua_dostring should still be added to the global function table, correct?

    I'm just wondering if there's a better way of going about this. I admit that my knowledge of Lua at this point is rather superficial and theoretical, so it's possible that I'm overlooking something obvious.

    Read the article

  • Why is OpenSubKey() returning null on my Win 7 64 bit system?

    - by BrMcMullin
    Has anyone seen OpenSubKey() and other Microsoft.Win32 registry functions return null on 64-bit systems when 32-bit registry keys are under Wow6432Node in the registry?

    I'm working on a unit testing framework that makes a call to OpenSubKey() from the .NET library. My dev system is a Win 7 64-bit environment with VS 2008 SP1 and the Win 7 SDK installed. The application we're unit testing is a 32-bit application, so the registry is virtualized under HKLM\Software\Wow6432Node. When we call:

        Registry.LocalMachine.OpenSubKey(@"Software\MyCompany\MyApp\");

    null is returned. However, explicitly stating to look here works:

        Registry.LocalMachine.OpenSubKey(@"Software\Wow6432Node\MyCompany\MyApp\");

    From what I understand, this function should be agnostic to 32-bit or 64-bit environments and should know to jump to the virtual node. Even stranger is the fact that the exact same call inside a compiled and installed version of our application runs just fine on the same system and gets the registry keys necessary to run.

    Any suggestions? Thanks in advance!
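
    Worth noting as an aside: if moving to .NET 4 is an option, RegistryKey.OpenBaseKey lets the caller pick the registry view explicitly instead of hard-coding Wow6432Node; a minimal sketch:

        using Microsoft.Win32;

        public static class RegistryHelper
        {
            // Opens the key against the 32-bit registry view explicitly, so
            // the Wow6432Node redirection no longer depends on the bitness
            // of the calling process.
            public static RegistryKey Open32BitKey(string subKey)
            {
                RegistryKey hklm32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine,
                                                             RegistryView.Registry32);
                return hklm32.OpenSubKey(subKey); // e.g. @"Software\MyCompany\MyApp"
            }
        }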

    Read the article

  • Parsing basic math equations for children's educational software?

    - by Simucal
    Inspired by a recent TED talk, I want to write a small piece of educational software. The researcher created little miniature computers in the shape of blocks called "Siftables".

        [David Merrill, inventor - with Siftables in the background.]

    There were many applications he used the blocks in, but my favorite was when each block was a number or a basic operation symbol. You could then re-arrange the blocks of numbers or operation symbols in a line, and it would display the answer on another Siftable block. So, I've decided I want to implement a software version of "Math Siftables" on a limited scale as my final project for a CS course I'm taking.

    What is the generally accepted way of parsing and interpreting a string of math expressions and, if they are valid, performing the operation? Is this a case where I should implement a full parser/lexer? I would imagine interpreting basic math expressions is a semi-common problem in computer science, so I'm looking for the right way to approach this.

    For example, if my Math Siftable blocks were arranged like:

        [1] [+] [2]

    this would be a valid sequence, and I would perform the necessary operation to arrive at "3". However, if the child were to drag several operation blocks together, such as:

        [2] [\] [\] [5]

    it would obviously be invalid.

    Ultimately, I want to be able to parse and interpret any number of chains of operations with the blocks that the user can drag together. Can anyone explain to me, or point me to resources for, parsing basic math expressions? I'd prefer as much of a language-agnostic answer as possible.
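
    Given that a line of blocks is just an alternating number/operator sequence, a full lexer may be overkill; here is a minimal left-to-right validator and evaluator sketch (no operator precedence - an assumption, since the blocks are read along a line):

        using System.Collections.Generic;

        public static class SiftableMath
        {
            // Validates and evaluates a strictly alternating sequence such as
            // ["1", "+", "2"], returning null for invalid arrangements like
            // two adjacent operator blocks.
            public static double? Evaluate(IList<string> blocks)
            {
                double result;
                if (blocks.Count % 2 == 0 || !double.TryParse(blocks[0], out result))
                    return null;

                for (int i = 1; i < blocks.Count; i += 2)
                {
                    double operand;
                    if (!double.TryParse(blocks[i + 1], out operand))
                        return null; // expected a number block here

                    switch (blocks[i])
                    {
                        case "+": result += operand; break;
                        case "-": result -= operand; break;
                        case "*": result *= operand; break;
                        case "/": result /= operand; break;
                        default: return null; // not an operator where one was expected
                    }
                }
                return result;
            }
        }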

    Read the article

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:

    - 5000 databases (spread out over 5 servers)
    - 1 database per client (so you can infer there are 1000 clients)
    - 2 to 2000 users per client (let's say the average is 100 users per client)
    - Clients (databases) come and go every day (let's assume most remain for at least one year)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    The Question: How would you set up Lucene search so that each client can only search within its own database?

    How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (This may be trivial - not sure yet.)

    Possible Solutions:

    1. Make an index for each client (database).
       Pro: Search is faster (than the one-index-for-all method). Indices are relative to the size of the client's data.
       Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.

    2. Have a single, gigantic index with a database_name field. Always include database_name as a filter.
       Pro: Not sure. Maybe good for the tech support or billing dept to search all databases for info.
       Con: Search is slower (than the index-per-client method). Flawed security if the query filter is removed.

    For Example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients. And each client gets their own database. His situation is quite similar to mine. Although, he didn't elaborate on the setup (particularly indices); hence, the need for this question.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.
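
    To make the filter idea in solution 2 concrete, here is a sketch in the classic Lucene API (shown as Lucene.Net-style C#; exact namespaces and the Occur type vary by version):

        using Lucene.Net.Index;
        using Lucene.Net.Search;

        public static class TenantSearch
        {
            // Wraps any user query so that it can only match documents
            // belonging to one client's database.
            public static Query ForTenant(Query userQuery, string databaseName)
            {
                var filtered = new BooleanQuery();
                filtered.Add(userQuery, Occur.MUST);
                filtered.Add(new TermQuery(new Term("database_name", databaseName)),
                             Occur.MUST);
                return filtered;
            }
        }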

    Read the article
