Search Results

Search found 16054 results on 643 pages for 'reference architecture'.


  • wpf command pattern

    - by evan
    I have a WPF GUI which displays a list of information in a separate window, running on a separate thread from the main application. As the user performs actions in the main window, the side window is updated (for example, clicking Page Down in the main window makes a listbox in the side window page down). Right now the architecture for this application feels very messy and I'm sure there is a cleaner way to do it. It looks like this: the main window contains a singleton SideWindowControl which communicates with an instance of SideWindowDisplay using events. The Page Down button, for example, works like this: 1) the button's event handler in the main window calls SideWindowControl.PageDown(); 2) inside PageDown() an event is created and raised; 3) finally the GUI, ShowSideWindowDisplay, which subscribes to the SideWindowControl.Actions event, handles the event and actually scrolls the listbox down - and because it lives on a different thread it has to do that by running the command via Dispatcher.Invoke(). This just seems like a very roundabout way to do it, and there must be a clearer way (the only part that can't change is that the main window and the side window must be on different threads). Perhaps using WPF commands? I'd really appreciate any suggestions! Thanks
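
    A minimal sketch of the thread-marshalling step described above (not the poster's actual code): a thin proxy object that the main window calls directly, which queues the work onto the side window's dispatcher. The class and member names are illustrative assumptions.

        using System;
        using System.Windows.Threading;

        public class SideWindowProxy
        {
            private readonly Dispatcher sideDispatcher;   // dispatcher of the side window's thread
            private readonly Action pageDownOnSideWindow; // e.g. scrolls the side window's listbox

            public SideWindowProxy(Dispatcher sideDispatcher, Action pageDownOnSideWindow)
            {
                this.sideDispatcher = sideDispatcher;
                this.pageDownOnSideWindow = pageDownOnSideWindow;
            }

            // Safe to call from the main window's thread.
            public void PageDown()
            {
                // BeginInvoke queues the delegate on the side window's thread
                // without blocking the caller.
                sideDispatcher.BeginInvoke(pageDownOnSideWindow);
            }
        }

    With something like this, the main window only ever talks to the proxy, and the event/Dispatcher.Invoke plumbing is confined to one place instead of being spread across SideWindowControl and the display.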

    Read the article

  • Threading calls to web service in a web service - (.net 2.0)

    - by Ryan Ternier
    Got a question regarding best practices for doing parallel web service calls from within a web service. Our portal gets a message, splits it into two messages, and then makes two calls to our broker. These need to run on separate threads to keep the overall timeout down. One solution is to do something similar to this (pseudo code):

        XmlNode DNode = GetaGetDemoNodeSomehow();
        XmlNode ENode = GetAGetElNodeSomehow();
        XmlNode elResponse;
        XmlNode demResponse;
        Thread dThread = new Thread(delegate() {
            // Web service call
            GetDemographics d = new GetDemographics();
            demResponse = d.HIALRequest(DNode);
        });
        Thread eThread = new Thread(delegate() {
            // Web service call
            GetEligibility ge = new GetEligibility();
            elResponse = ge.HIALRequest(ENode);
        });
        dThread.Start();
        eThread.Start();
        dThread.Join();
        eThread.Join();
        // Combine the resulting XML and return it.
        // Maybe throw a bit of logging in to make architecture happy.

    Another option we thought of is to create a worker class, pass it the service information, and have it execute; this would give us a bit more control over what is going on, but could add additional overhead. A third option brought up was two asynchronous calls managed through a loop: when the calls complete (success or error), the loop picks that up and ends. The portal service will be called about 50,000 times a day. I don't want to gold-plate this sucker; I'm looking for something lightweight. The services being called on the broker already have timeout limits set, and are heavily logged and audited, so I'm not worried about that part. This is .NET 2.0, and as much as I would love to upgrade, I can't right now - so please leave anything newer than 2.0 out.
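
    For the asynchronous option mentioned above, here is a rough sketch of what it could look like with the Begin/End method pair that the .NET 2.0 ASMX proxy generator normally emits for each web method (the BeginHIALRequest/EndHIALRequest names are an assumption based on the method name in the pseudo code, and the variables continue from it):

        GetDemographics d = new GetDemographics();
        GetEligibility ge = new GetEligibility();

        IAsyncResult dResult = d.BeginHIALRequest(DNode, null, null);
        IAsyncResult eResult = ge.BeginHIALRequest(ENode, null, null);

        // One overall timeout for both calls instead of one per thread.
        bool finished = WaitHandle.WaitAll(
            new WaitHandle[] { dResult.AsyncWaitHandle, eResult.AsyncWaitHandle },
            TimeSpan.FromSeconds(30), false);

        // EndXxx rethrows any exception from the call; if 'finished' is false,
        // decide whether to abort or still block on the slow call.
        XmlNode demResponse = d.EndHIALRequest(dResult);
        XmlNode elResponse = ge.EndHIALRequest(eResult);

    This avoids creating two dedicated threads per request, which can matter at volume, and keeps the timeout handling in one place.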

    Read the article

  • Trying to make a plugin system in C++

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's Meta-Object Compiler reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent) {}
            virtual void run() = 0;
        signals:
            void taskFinished(bool success = true);
        };

    Then a plugin might have this task:

        class PrinterTask : public Task {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}
            void run() {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }
            inline const QString &getData() const;
            inline void setData(QString data);
            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        };

    In a nutshell, here's what I want to do: load a plugin, find all the Task interface implementations in it, let the user choose a Task and edit its specific Q_PROPERTYs, then run the Task. It's important that one .dll holds multiple tasks, because I want them to be grouped by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store an arbitrary number of Tasks in one .dll module, and as far as I can tell you can only load one interface per plugin (I could be wrong?). If so, the only way to accomplish what I want is to create a FactoryInterface with string-based keys that returns the objects (as in Qt's Plug-And-Paint example), which is boilerplate I would like to avoid. Does anyone know a cleaner C++ plugin architecture than Qt's for this? Also, am I safe in assuming Qt's reflection capabilities will do what I want (i.e. editing an unknown, dynamically loaded task's properties in the QtPropertyBrowser before dispatching)?

    Read the article

  • Creating a RESTful API - HELP!

    - by Martin Cox
    Hi chaps. Over the last few weeks I've been learning about iOS development, which has naturally led me into the world of APIs. Searching around on the Internet, I've come to the conclusion that using the REST architecture is very much recommended, due to its supposed simplicity and ease of implementation. However, I'm really struggling with the implementation side of REST. I understand the concept: using HTTP methods as verbs to describe the action of a request, responding with suitable response codes, and so on. It's just that I don't understand how to code it. I don't get how I map a URI to an object. I understand that a GET request for domain.com/api/user/address?user_id=999 would return the address of user 999 - but I don't understand where or how that mapping from /user/address to some method that queries a database takes place. Is this all coded in one PHP script? Would I just have a method that grabs the URI like so:

        $array = explode("/", ltrim(rtrim($_SERVER['REQUEST_URI'], "/"), "/"));

    and then cycle through that array? First there would be a request for "user", so the PHP script would direct my request to the user object and then invoke the address method. Is that what actually happens? I've probably not explained my thought process very well there. The main thing I'm not getting is how the URI /user/address?id=999 is somehow broken down and executed - does it actually resolve to code?

        class user(id) {
            address() {
                // get user address
            }
        }

    I doubt I'm making sense now, so I'll call it a day trying to explain further. I hope someone out there can understand what I'm trying to say! Thanks chaps, I look forward to your responses. Martin. P.S. - I'm not a developer yet, I'm learning :)

    Read the article

  • This is a great job opportunity!!! [closed]

    - by Stuart Gordon
    ASP.NET MVC Web Developer / London / £450pd / £25-£50,000pa / Interested? Contact [email protected]! As a web developer within the engineering department, you will work with a team of enthusiastic developers building a new ASP.NET MVC platform for online products utilising exciting cutting-edge technologies and methodologies (elements of Agile, Scrum, Lean, Kanban and XP) as well as developing new stand-alone web products that conform to W3C standards. Key Responsibilities and Objectives: Develop ASP.NET MVC websites utilising frameworks and enterprise search technology. Develop and expand content management and delivery solutions. Help maintain and extend existing products. Formulate ideas and visions for new products and services. Be a proactive part of the development team and provide support and assistance to others when required. Qualification/Experience Required: The ideal candidate will have a web development background and be educated to degree level in a Computer Science/IT related course, plus ASP.NET MVC experience. The successful candidate needs to be able to demonstrate commercial experience in all or most of the following skills: Essential: ASP.NET MVC with C# (Visual Studio), Castle, NHibernate, XHTML and JavaScript. Experience of Test Driven Development (TDD) using tools such as NUnit. Preferable: Experience of Continuous Integration (TeamCity and MSBuild), SQL Server (T-SQL), experience of source control such as Subversion (plus TortoiseSVN), jQuery. Learn: Fluent NHibernate, S#arp Architecture, Spark (view engine), Behaviour Driven Design (BDD) using MSpec. Furthermore, you will possess good working knowledge of W3C web standards, web usability, web accessibility and understand the basics of search engine optimisation (SEO). You will also be a quick learner, have good communication skills and be a self-motivated and organised individual.

    Read the article

  • Why do .NET developers offer 32-bit/64-bit versions of .NET assemblies?

    - by Tyler
    Every now and then I see both x86 and x64 versions of a .NET assembly. Consider the following web part for SharePoint. Why wouldn't the developer just offer a single version and let the JIT compiler sort out the rest? When I see these kinds of offerings, is it just that the developer decided to create a native image using a tool like ngen in order to avoid JIT compilation? Someone please help me out here; I feel like I'm missing something of note. Update: from what I gathered below, both x86 and x64 builds are offered for one or more of the following reasons: The developer wanted to avoid JITing and created a native image of the code, targeting a given architecture with a tool like ngen.exe. The assembly contains platform-specific COM calls, so there is no point in building it as AnyCPU; in these cases builds that target different platforms may contain different code. The assembly may contain Win32 calls using P/Invoke which won't get remapped by the JIT, so the build should target the platform it is bound to.
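
    A quick way to see why the same AnyCPU IL can still behave differently per process, and why a P/Invoke-heavy assembly is often pinned to one platform (a small sketch, not from the original post):

        // Running as x86 prints 4 bytes; running as x64 prints 8. Native structs,
        // pointer arithmetic and platform-specific DLL paths keyed off this are
        // exactly what break AnyCPU builds.
        Console.WriteLine("{0}-bit process, pointer size {1} bytes",
            IntPtr.Size * 8, IntPtr.Size);

    Native images are also per-architecture: running "ngen.exe install MyAssembly.dll" from the 32-bit and the 64-bit Framework directories produces two different native images, which is another reason a vendor might ship explicit x86 and x64 packages.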

    Read the article

  • Multiple configurations in Qt

    - by user360607
    Hi all! I'm new to Qt Creator and I have several questions regarding multiple build configurations. A side note: I have Qt Creator 1.3.1 installed on my Linux machine. I need two configurations in my Qt Creator project. The thing is that these aren't simply debug and release but are based on the target architecture - x86 or x64. I came across http://stackoverflow.com/questions/2259192/building-multiple-targets-in-qt-qmake and from that I tried something like:

        Conf_x86 { TARGET = MyApp_x86 }
        Conf_x64 { TARGET = MyApp_x64 }

    This way, however, I don't seem to be able to use the Qt Creator IDE to build each of these separately (Build All, Rebuild All, etc. from the IDE menu). Is there a way to achieve this - maybe even show Conf_x86 and Conf_x64 as new build configurations in Qt Creator? One other thing: the Qt I have is 64-bit, so by default the target built with the Qt Creator IDE will also be 64-bit. I noticed that the effective qmake call in the build step includes the option '-spec linux-g++-64'. I also noticed that if I add '-spec linux-g++-32' in 'Additional arguments' it overrides '-spec linux-g++-64' and the resulting target is 32-bit. How can I achieve this by simply editing the contents of the .pro file? I saw that all these changes are initially saved in the .pro.user file, but that doesn't suit me at all. I need to be able to make these configurations from the .pro file if possible. Any help will be appreciated. Thanks in advance!

    Read the article

  • nhibernate mapping: delete collection, insert new collection with old IDs

    - by npeBeg
    My issue looks similar to this one: (link), but I have a one-to-many association:

        <set name="Fields" cascade="all-delete-orphan" lazy="false" inverse="true">
          <key column="[TEMPLATE_ID]"></key>
          <one-to-many class="MyNamespace.Field, MyLibrary"/>
        </set>

    (I also tried to use ). This mapping is for the Template object. Both it and the Field object have their ID generators set to identity. When I call session.Update for the Template object it almost works: if a Field object has an Id number, an UPDATE SQL statement is issued; if the Id is 0, an INSERT is performed. But if I delete a Field object from the collection it has no effect on the database. I found that if I also call session.Delete for that Field object everything is OK, but because of the client-server architecture I don't know what to delete. So I decided to delete all the collection elements from the DB and call session.Update with a new collection - and I hit an issue: NHibernate performs the UPDATE operation for the Field objects that have a non-zero Id, even though they have been removed from the DB! Maybe I should use some other Id generator or something. What is the best way to make NHibernate perform a "delete all"/"insert all" routine for the collection?
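
    One pattern that is often suggested for this situation (a sketch only, not a confirmed fix - the variable names and the Template back-reference on Field are assumptions): keep the original mapped collection instance and mutate it, rather than assigning a brand-new collection, so that cascade="all-delete-orphan" can see which children were removed:

        // 'template' is the persistent Template loaded in this session;
        // 'incomingFields' is the new set of fields sent by the client.
        template.Fields.Clear();
        foreach (Field field in incomingFields)
        {
            field.Template = template;      // keep the inverse side in sync
            template.Fields.Add(field);
        }
        session.Update(template);
        // On flush, fields no longer present become orphans and are deleted;
        // fields with Id 0 are inserted, existing ones are updated.

    Whether this fits depends on how the detached objects come back from the client, but it avoids having to know explicitly which Field rows to session.Delete.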

    Read the article

  • What's the best way to store Logon User information for Web Application?

    - by Morgan Cheng
    I was once on a project for a web application developed on ASP.NET. For each logged-on user, an object (let's call it UserSessionObject here) is created and stored in RAM. For each HTTP request of a given user, the matching UserSessionObject instance is used to access user state information and the connection to the database. So this UserSessionObject is pretty important. This design brings several problems that were found later: 1) Since the UserSessionObject is cached in ASP.NET memory space, we have to configure the load balancer to use sticky sessions - that is, HTTP requests within a single session are always sent to the same web server behind it. This limits scalability and maintainability. 2) The UserSessionObject is accessed on every HTTP request. To keep it consistent, there is an exclusive lock on the UserSessionObject, and only one HTTP request per user can be processed at any given time because it must obtain the lock first. Performance and response time suffer. Now I'm wondering whether there is a better design to handle the logged-on-user case. It seems a shared-nothing architecture would help: logon user info would be retrieved from the database on each request, but I'm afraid that would hurt performance. Is there any design pattern for logged-on users in web apps? Thanks.
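
    A rough sketch of the shared-nothing direction (the names are illustrative, not from the original project): keep only a key in session, and rebuild or cache the per-user state each request instead of holding a long-lived locked object:

        // Assumes a small, serializable UserState class and a repository that can
        // load it cheaply by user id - both hypothetical. Uses System.Web and
        // System.Web.Caching.
        public static UserState GetUserState(HttpContext context)
        {
            string userId = (string)context.Session["UserId"];
            string cacheKey = "user-state:" + userId;

            UserState state = HttpRuntime.Cache[cacheKey] as UserState;
            if (state == null)
            {
                state = UserStateRepository.Load(userId);   // targeted DB read
                HttpRuntime.Cache.Insert(cacheKey, state, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            return state;
        }

    Because any server can rebuild the state from the database, sticky sessions are no longer required, and since each request works on data it fetched itself there is no shared in-memory object to lock. The cache here is still per server, so this only works if the state can always be re-derived from the database.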

    Read the article

  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly with persistence, concurrency and infrastructure concerns. The server contains a number of entities which hold both sensitive and public information. Think, for example, of the entities as persons: assume that social security number and name are sensitive and age is publicly viewable. When starting the client, the user is presented with a number of entities, without any sensitive information disclosed. At any time the user can choose to log in and authenticate against the server; if the authentication is successful, the user is granted access to the sensitive information. The client hosts a domain model and I was thinking of implementing this as a kind of "lazy loading": the first request instantiates the entities, and they are later refreshed with the sensitive data. The entity getters would throw exceptions on sensitive information that has not been disclosed, e.g.:

        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed) throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }

    Another, friendlier approach could be to have a value object indicating that the value is undisclosed:

        get
        {
            if (undisclosed) return undisclosedValue;
            return base.SocialSecurityNumber;
        }

    Some concerns: What if the user logs in and then out? The sensitive data has been loaded but must be hidden once again. One could argue that this type of functionality belongs within the domain and not in some infrastructural implementation (i.e. repository implementations). And, as always when dealing with a larger number of properties, there's a risk that this kind of functionality clutters the code. Any insights or discussion are appreciated!
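
    As a sketch of the second, value-object approach (my own illustration, not from the post): a small generic wrapper lets callers test for disclosure instead of catching exceptions, and keeps the per-property boilerplate down:

        // Default-constructed instances represent "undisclosed"; only the
        // refresh-from-server step creates disclosed values.
        public struct Protected<T>
        {
            private readonly T value;
            private readonly bool isDisclosed;

            public Protected(T value)
            {
                this.value = value;
                this.isDisclosed = true;
            }

            public bool IsDisclosed { get { return isDisclosed; } }

            public T Value
            {
                get
                {
                    if (!isDisclosed) throw new UndisclosedDataException();
                    return value;
                }
            }
        }

        // Usage: public Protected<string> SocialSecurityNumber { get; set; }
        // Logging out can simply reset such properties to default(Protected<string>).

    This keeps the "has it been disclosed?" rule in one place in the domain rather than repeated in every getter.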

    Read the article

  • Suggestions on writing a TCP IP messaging system (Client/Server) using Delphi 2010

    - by Shane
    I would like to write a messaging system using TCP/IP in Delphi 2010, and I would like to hear what my best options are for using the standard Delphi 2010 components/Indy components to do this. I want to write a server which listens for and forwards messages to all machines on the network running a client. 1) a) Clients can send a message to the server to be forwarded to all other clients. b) Clients listen for messages from other senders (via the server) and display them. 2) a) The server can send a message to all clients. b) The server forwards any messages from clients to all other clients. Thanks for any suggestions. NOTE: I am not writing an instant messaging or chat program. This is merely a system where users can send alerts/messages to other users - they cannot reply to each other! No commercial, shareware, etc. links, please! I would like to hear how you would go about writing this type of system, what approaches you would take, and possibly the TCP/IP messaging architecture you would use - whether it be the straight Windows API, Indy components, etc.

    Read the article

  • I am confused about how to use @SessionAttributes

    - by yusaku
    I am trying to understand the architecture of Spring MVC, but I am completely confused by the behavior of @SessionAttributes. Please look at the SampleController below: it handles the POST method with a parameter declared as SuperForm, and in fact only the fields of SuperForm are bound, as I expected. However, after I put @SessionAttributes on the controller, the handler method binds the model attribute as a SubAForm. Can anybody explain to me what happens in this binding?

        @Controller
        @SessionAttributes("form")
        @RequestMapping(value = "/sample")
        public class SampleController {

            @RequestMapping(method = RequestMethod.GET)
            public String getCreateForm(Model model) {
                model.addAttribute("form", new SubAForm());
                return "sample/input";
            }

            @RequestMapping(method = RequestMethod.POST)
            public String register(@ModelAttribute("form") SuperForm form, Model model) {
                return "sample/input";
            }
        }

        public class SuperForm {
            private Long superId;

            public Long getSuperId() {
                return superId;
            }

            public void setSuperId(Long superId) {
                this.superId = superId;
            }
        }

        public class SubAForm extends SuperForm {
            private Long subAId;

            public Long getSubAId() {
                return subAId;
            }

            public void setSubAId(Long subAId) {
                this.subAId = subAId;
            }
        }

        <form:form modelAttribute="form" method="post">
            <fieldset>
                <legend>SUPER FIELD</legend>
                <p>SUPER ID: <form:input path="superId" /></p>
            </fieldset>
            <fieldset>
                <legend>SUB A FIELD</legend>
                <p>SUB A ID: <form:input path="subAId" /></p>
            </fieldset>
            <p><input type="submit" value="register" /></p>
        </form:form>

    Read the article

  • C++ Namespaces & templates question

    - by Kotti
    Hi! I have some functions that can be grouped together, but don't belong to any object/entity and therefore can't be treated as methods. In this situation I would basically create a new namespace, put the declarations in a header file and the implementation in a cpp file. Also (if needed) I would create an anonymous namespace in that cpp file and put there any additional functions that don't have to be exposed/included in my namespace's interface. See the code below (probably not the best example and could be done better with another program architecture, but I just can't think of a better sample...).

    Sample code (header):

        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2);
        }

    Sample code (cpp):

        #include "header"

        // Anonymous namespace that wraps
        // routines that are used inside 'algorithm' methods
        // but don't have to be exposed
        namespace {
            void RefractObject(Object* object1) {
                // Do something with that object
                // (...)
            }
        }

        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2) {
                if (...) RefractObject(object1);
            }
        }

    So far so good. I guess this is a good way to manage my code, but I don't know what I should do if I have some template-based functions and want to do basically the same. If I'm using templates, I have to put all my code in the header file. OK, but how should I conceal some implementation details then? I want to hide the RefractObject function from my interface, but I can't simply remove its declaration (just because I have all my code in a header file)... The only approach I came up with was something like:

    Sample code (header):

        namespace algorithm {
            // Is still exposed as a part of the interface!
            namespace impl {
                template <typename T>
                void RefractObject(T* object1) {
                    // Do something with that object
                    // (...)
                }
            }

            template <typename T, typename Y>
            void HandleCollision(T* object1, Y* object2) {
                impl::RefractObject(object1);
                // Another stuff
            }
        }

    Any ideas how to make this better in terms of code design?

    Read the article

  • Refactoring an ASP.NET 2.0 app to be more "modern"

    - by Wayne M
    This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP-type system written in .NET 2.0 to manage all of its day-to-day business (let's simplify and say customer accounts and records). The app was written a couple of years ago when .NET 2.0 was just out and uses the following architectural designs: WebForms; a data layer that is a thin wrapper around SqlCommand and calls stored procedures; rudimentary DTO-style business objects that are populated via the sprocs; and a "business logic" layer that acts as a gateway between the webform and the database (i.e. code-behind calls that layer). Let's say that as more changes and requirements are added to the application, you start to feel that the old architecture is showing its age and changes are increasingly difficult to make. How would you go about introducing refactoring steps to A) modernize the app (i.e. proper separation of concerns) and B) make sure that the app can readily adapt to change in the organization? IMO the changes would involve: introduce an ORM like LINQ to SQL and get rid of the sprocs for CRUD; assuming you can't just throw out WebForms, introduce the M-V-P pattern to the forms (see the sketch below); make sure the gateway classes conform to SRP and the other SOLID principles; and change the logic that is reused into web service methods instead of having to duplicate code. What are your thoughts? Again, this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.
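
    For the M-V-P step, a minimal sketch of what a WebForms page could be refactored towards (the interface and class names here are illustrative, not from the scenario):

        // The view contract the code-behind implements; the presenter only knows this.
        public interface ICustomerView
        {
            string CustomerName { set; }
            event EventHandler LoadRequested;
        }

        public class CustomerPresenter
        {
            private readonly ICustomerView view;
            private readonly ICustomerRepository customers;   // hypothetical repository

            public CustomerPresenter(ICustomerView view, ICustomerRepository customers)
            {
                this.view = view;
                this.customers = customers;
                view.LoadRequested += OnLoadRequested;
            }

            private void OnLoadRequested(object sender, EventArgs e)
            {
                // All decision-making lives here, testable without a web server.
                view.CustomerName = customers.GetNameOfCurrentCustomer();
            }
        }

    The code-behind then shrinks to implementing ICustomerView, creating the presenter in Page_Load and raising LoadRequested, which is what makes the business logic unit-testable without spinning up WebForms.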

    Read the article

  • How to pass common arguments to Perl modules

    - by Leonard
    I'm not thrilled with the argument-passing architecture I'm evolving for the (many) Perl scripts that have been developed to call various Hadoop MapReduce jobs. There are currently 8 scripts (of the form run_something.pl) that are run from cron, and more are on the way ... we expect anywhere from 1 to 3 more for every function we add to Hadoop. Each of these has about 6 identical command-line parameters, and a couple of command-line parameters that are similar, all specified with Euclid. The implementations are in a dozen .pm modules, some of which are common and others of which are unique. Currently I'm passing the args globally to each module. Inside run_something.pl I have:

        set_common_args (%ARGV);
        set_something_args (%ARGV);

    and inside Something.pm I have:

        sub set_something_args { (%MYARGS) = @_; }

    so that I can then do:

        if ( $MYARGS{'--needs_more_beer'} ) {
            $beer++;
        }

    I'm seeing that I'm probably going to have additional "common" modules that I'll want to pass args to, so I'll end up with three or four set_xxx_args calls at the top of each run_something.pl, and it just doesn't seem very elegant. On the other hand, it beats passing the whole stupid argument array down the call chain, and choosing and passing individual elements down the call chain is (a) too much work, (b) error-prone and (c) doesn't buy much. In a lot of ways what I'm doing is just object-oriented design without the object-oriented language trappings, and it looks uglier without said trappings, but nonetheless ... does anyone have thoughts or ideas?

    Read the article

  • On the search for my next great .Net Read

    - by user127954
    Just got done with "The Art of Unit Testing". It was a great read and I think everyone should go buy a copy. With that said, the next book I'd like to read would be an architecture/design type book that focuses heavily on building your objects/software in such a way that it has: low coupling, high cohesion, easy maintainability/extensibility, easy testability, and easy navigation/debugging. Those characteristics are the most important ones, but maybe it would also cover (though this isn't necessary) designing for performance - I don't want to design a system and at the end find out it's dog slow :) - and scalability - again, I don't want to design something and at the end find out it won't scale. I'd also prefer (but again, it's not mandatory) something newer - architectural principles seem to gradually evolve and improve over time and I'd like something with current thinking - and .NET as the illustrating language; it's what I use every day, so I'd prefer it, though it doesn't really matter whether it's in VB.NET or C#. Some of the topics it would cover are how to minimize dependencies and how to use interfaces throughout your solution rather than concrete classes. Maybe it would also contrast/compare some of the newer design principles like DDD, the Repository pattern, etc. I already have "Clean Code" (I don't know if it's this type of book or not) and "Working Effectively with Legacy Code" on my radar, but I'd like to read a book on the topic described above first. Is there such a book?

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of Web Applications that are centered around similar data. The architecture I decided on at the time was that each application would have their own database and web-root application. Each application maintains a connection pool to its own database and a central database for shared data (logins, etc.) A co-worker has been positing that this strategy will not scale because having so many different connection pools will not be scalable and that we should refactor the database so that all of the different applications use a single central database and that any modifications that may be unique to a system will need to be reflected from that one database and then use a single pool powered by Tomcat. He has posited that there is a lot of "meta data" that goes back and forth across the network to maintain a connection pool. My understanding is that with proper tuning to use only as many connections as necessary across the different pools (low volume apps getting less connections, high volume apps getting more, etc.) that the number of pools doesn't matter compared to the number of connections or more formally that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps and that each system could make modifications on the schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is not strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connections versus one large database/connection pool.

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, first - this is not meant to be an ignorant 'which is better' war thread... Rather, I need help making an architecture decision/argument to put forward to my boss. Skipping the details, I would simply love to find the results of anyone who has done performance comparisons of shell scripts vs. [insert general-purpose interpreted programming language here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in some number of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops - and also, without hitting a database, a regular 50k-iteration loop comparison doing different types of calculations, and things of that nature? In particular, right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations/instructions, etc. Before you ask 'why not just write a quick test yourself?', the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the more experienced guys on here would be greatly beneficial, not to mention time-saving, as we are in a near-perpetual deadline crunch as it is ;). Thanks so much in advance,
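
    For the C# side of such a test, here is a sketch of the kind of micro-benchmark being asked about (the connection string and query are placeholders, and System.Data.OracleClient is assumed here - any ADO.NET Oracle provider would look much the same):

        using System;
        using System.Data.OracleClient;
        using System.Diagnostics;

        class DbLoopBenchmark
        {
            static void Main()
            {
                string connectionString = "...";   // placeholder
                Stopwatch sw = Stopwatch.StartNew();
                using (OracleConnection conn = new OracleConnection(connectionString))
                {
                    conn.Open();
                    for (int i = 0; i < 50000; i++)
                    {
                        using (OracleCommand cmd = new OracleCommand("SELECT 1 FROM dual", conn))
                        {
                            cmd.ExecuteScalar();
                        }
                    }
                }
                sw.Stop();
                Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    The shell equivalent would loop the same query through sqlplus, and comparing the two numbers on the same machine is usually more convincing to a boss than any general benchmark found online.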

    Read the article

  • NSTask Launch causing crash

    - by tripskeet
    Hi, I have an application that can import an XML file through this terminal command:

        open /path/to/main\ app.app --args myXML.xml

    This works great with no issues, and I have used AppleScript to launch this command through the shell and it works just as well. Yet when I try using Cocoa's NSTask with this code:

        NSTask *task = [[NSTask alloc] init];
        [task setLaunchPath:@"/usr/bin/open"];
        [task setCurrentDirectoryPath:@"/Applications/MainApp/InstallData/App/"];
        [task setArguments:[NSArray arrayWithObjects:[(NSURL *)foundApplicationURL path], @"--args", @"ImportP.xml", nil]];
        [task launch];

    the application starts up to the initial screen and then crashes when either the Next button is clicked or when trying to close the window. I've tried using NSAppleScript with this:

        NSAppleScript *script = [[NSAppleScript alloc] initWithSource:@"tell application \"Terminal\" do script \"open /Applications/MainApp/InstallData/App/Main\\\\ App.app\" end tell"];
        NSDictionary *errorInfo;
        [script executeAndReturnError:&errorInfo];

    This will launch the program, but it crashes as well, and I get this error in my Xcode debug window:

        2011-01-04 17:41:28.296 LaunchAppFile[4453:a0f] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find: /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper LaunchAppFile: OpenScripting.framework - scripting addition "/Library/ScriptingAdditions/Adobe Unit Types.osax" declares no loadable handlers.

    So with some research I came up with this:

        NSAppleScript *script = [[NSAppleScript alloc] initWithSource:@"do shell script \"arch -i386 osascript /Applications/MainApp/InstallData/App/test.scpt\""];
        NSDictionary *errorInfo;
        [script executeAndReturnError:&errorInfo];

    But this gives the same result as the previous attempt. Any ideas on what causes this crash?

    Read the article

  • Mapping one column in a table to multiple tables

    - by user1721814
    I am working on a new product, a WMS system. I have built this kind of thing in the past using ASP, VB and other techniques where we did not hard-code the mapping, but now I am working on it using MVC and Entity Framework, and I am stumped. How can I map one column in a transaction table to a column in multiple tables? I have a transaction table, trans, with columns Transid, orderref, TType, productid, qty, ... (more columns). The orderref column will hold either a Receiptkey, orderkey, movementkey or adjustmentkey, and the TType column tells me which type of transaction I am dealing with and, based on that, which table to link to. How can I achieve this in Entity Framework? This is the most important step. I have done this many times in other languages, but using EF I am stuck. I have looked a lot online but have not found it. I am new to MVC and the Entity Framework architecture. Any guidance would be highly appreciated. Ranjit
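
    One direction that is sometimes used when a column is a polymorphic reference like this (a sketch only - the entity names, key names and TType values are made up, and it deliberately avoids mapping orderref as a real foreign key): leave orderref as a plain scalar property and resolve the related row in code based on TType:

        // Hypothetical partial class extending the generated 'trans' entity.
        // Requires 'using System.Linq;'.
        public partial class trans
        {
            public object ResolveOrderSource(WmsEntities db)
            {
                switch (TType)
                {
                    case "RECEIPT":
                        return db.Receipts.FirstOrDefault(r => r.Receiptkey == orderref);
                    case "ORDER":
                        return db.Orders.FirstOrDefault(o => o.orderkey == orderref);
                    case "MOVEMENT":
                        return db.Movements.FirstOrDefault(m => m.movementkey == orderref);
                    case "ADJUSTMENT":
                        return db.Adjustments.FirstOrDefault(a => a.adjustmentkey == orderref);
                    default:
                        return null;
                }
            }
        }

    Entity Framework cannot express "this one column points at one of four tables depending on another column" as a single navigation property, which is why the dispatch usually ends up in code like this (or in a redesign of the schema using views or inheritance).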

    Read the article

  • What is the most common way to use a middleware in node with express and connect

    - by Bernhard
    I'm thinking about the correct way to make use of middleware in a node.js web project using Express and Connect, which is growing at the moment. Of course there are middlewares that have to pass through or extend requests globally, but in many cases there are special jobs, like preparing incoming data, where the middleware should only apply to a certain set of HTTP methods and routes. I have a component-based architecture, and each component brings its own middleware layer which can implement these jobs for the requests that component can handle. On app startup every required component is loaded and prepared. Is it a good idea to bind the middleware execution to URLs to keep CPU load lower, or is it better to use middleware only for global purposes? Here's a dummy example of what a URL-related middleware could look like:

        app.use(function(req, res, next) {
            // Check if the requested route is part of the current component
            // or if the middleware should run on any request
            if (APP.controller.groups.Component.isExpectedRoute(req) ||
                APP.controller.groups.Component.getConfig().MIDDLEWARE_PASS_ALL === true) {
                // Execute the middleware code here
                console.log('This is a route which should be affected by middleware');
                ...
                next();
            } else {
                next();
            }
        });

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions that ship with .NET 4). Now what about parallelism across the network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like this, in C#:

        NetworkParallel.ForEach(myEnumerable, () => {
            // Computing and/or access to web resources or a local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: the fact that such a parallel task would be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would really be two distinct applications rather than two threads of the same application; and the very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines and then communicating with them over the local network. Despite those differences, such parallelism is quite possible, even without talking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily build extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all of its CPU, whereas ten other machines sit at 5% CPU most of the time. In such a case every action could be done in parallel, resulting in perhaps six to ten seconds for the overall process instead of forty.

    Read the article

  • Click behaviour - Difference in IE and FF ?!

    - by OlliD
    Hey folks, I've just come to the conclusion that a project I am currently working on might have a "logical" error in its functionality. I'm currently using server technology with PHP/MySQL and jQuery. Within the page there's a normal link with the tag <a href="contentpage?page=xxx">next step</a>. The pain point seems to be the jQuery click event bound to the same element. The intention was to save the (current) content of the page (the form elements) via another PHP script using PHP sessions. For some reason, IE handles the jQuery click event right before executing the standard link navigation, which reloads the current page with the new page parameter. With FF the behaviour is different: I assume that FF first follows the link and only afterwards executes the JavaScript code that handles the click event. Therefore the result set there is wrong, or rather empty. My question is whether you have run into the same behaviour and how you handled or worked around this problem. I'd be thankful for any of your tips or further feedback. Maybe you also have a suggestion on how to rethink the current architecture. Regards, Oliver

    Read the article

  • Capturing stdout from an imported module in wxpython and sending it to a textctrl, without blocking the GUI

    - by splafe
    There are a lot of very similar questions to this, but I can't find one that applies specifically to what I'm trying to do. I have a simulation (written in SimPy) that I'm writing a GUI for; the main output of the simulation is text, printed to the console from 'print' statements. Now, I thought the simplest way would be to create a separate module GUI.py and import my simulation program into it:

        import osi_model

    I want all the print statements to be captured by the GUI and appear inside a TextCtrl, of which there are countless examples on here, along these lines:

        class MyFrame(wx.Frame):
            def __init__(self, *args, **kwds):
                <general frame initialisation stuff..>
                redir = RedirectText(self.txtCtrl_1)
                sys.stdout = redir

        class RedirectText:
            def __init__(self, aWxTextCtrl):
                self.out = aWxTextCtrl

            def write(self, string):
                self.out.WriteText(string)

    I am also starting my simulation from a 'Go' button:

        def go_btn_click(self, event):
            print 'GO'
            self.RT = threading.Thread(target=osi_model.RunThis())
            self.RT.start()

    This all works fine, and the output from the simulation module is captured by the TextCtrl, except that the GUI locks up and becomes unresponsive - and I still need it to be accessible (at the very minimum to have a 'Stop' button). I'm not sure if this is a botched attempt at creating a new thread on my part, but I assume a new thread will be needed at some stage in this process. People suggest using wx.CallAfter, but I'm not sure how to go about that considering the imported module doesn't know about wx, and I also can't realistically go through the entire simulation architecture and change all the print statements to wx.CallAfter; any attempt to capture the shell from inside the imported simulation program leads to the program crashing. Does anybody have any ideas about how I can best achieve this? All I really need is for all console text to be captured by a TextCtrl while the GUI remains responsive, with all the text coming solely from an imported module. (Also, a secondary question regarding the Stop button: is it bad form to just kill the simulation thread?) Thanks, Duncan

    Read the article

  • Multiple "ObjectChangeTracker" getting created, can it be avoided?

    - by user555937
    Hi, we are working on a POC with the following architecture (MVVM): WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB). These are all separate projects. In the data access layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We are using self-tracking entities to communicate back and forth with the WPF client through the WCF service. With a single model everything works fine, but when we created multiple models a few issues appeared. The multiple models share a few duplicate tables/entities. The two problems are: 1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created. E.g. CompanyModel (edmx) - Company (entity) - ObjectChangeTracker, ObjectState; ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1; OrderModel (edmx) - Order (entity) - ObjectChangeTracker2, ObjectState2. Is there any way to avoid this? 2) A few tables are shared across the models; e.g. the Company entity is used in all the models above. At compile time this does not throw any error, but at run time it fails with "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type "Company"". To resolve this we renamed the entities with a prefix to make them unique. Is there any other way to resolve this without changing the entity names in the same assembly? Thanks in advance, and I'd appreciate it if anyone has an approach for these issues. Thanks, Kiran

    Read the article
