Search Results

Search found 5302 results on 213 pages for 'sharp architecture'.


  • Real-time embeddable HTTP server library required

    - by Howard May
    Having looked at several available HTTP server libraries I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements. I need a library which presents a 'pipelined' API. Pipelining describes the HTTP feature where multiple HTTP requests can be sent across a TCP link without waiting for a response. I want a similar feature on the library API, where my application can receive all of those requests without having to send a response first (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency). So the web server library will need to support the following flow:

    1) HTTP Client transmits HTTP request 1
    2) HTTP Client transmits HTTP request 2
    ...
    3) Web Server Library receives request 1 and passes it to My Web Server App
    4) My Web Server App receives request 1 and dispatches it to My System
    5) Web Server receives request 2 and passes it to My Web Server App
    6) My Web Server App receives request 2 and dispatches it to My System
    7) My Web Server App receives the response to request 1 from My System and passes it to the Web Server
    8) Web Server transmits HTTP response 1 to HTTP Client
    9) My Web Server App receives the response to request 2 from My System and passes it to the Web Server
    10) Web Server transmits HTTP response 2 to HTTP Client

    Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses still outstanding. Additional requirements are:

    - Embeddable into an existing 'C' application
    - Small footprint; I don't need all the functionality available in Apache etc.
    - Efficient; will need to support thousands of requests a second
    - Allows asynchronous responses to requests; there is a small latency to responses, and given the required request throughput a synchronous architecture is not going to work for me
    - Support for persistent TCP connections
    - Support for use with Server-Push Comet connections
    - Open Source / GPL
    - Support for HTTPS
    - Portable across Linux and Windows; preferably more

    I will be very grateful for any recommendation. Best Regards
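
    To make the shape of the API concrete, here is a rough sketch of the calling pattern I am after, written against libevent's evhttp (one of the candidates I looked at). The hand-off into My System is faked so the sketch stays self-contained, and whether evhttp preserves response ordering for pipelined requests is exactly the kind of thing I would need confirmed:

        #include <event2/event.h>
        #include <event2/http.h>
        #include <event2/buffer.h>

        /* Steps 7-10: the deferred reply, driven from My System's completion path. */
        static void respond(struct evhttp_request *req, const char *body)
        {
            struct evbuffer *buf = evbuffer_new();
            evbuffer_add_printf(buf, "%s", body);
            evhttp_send_reply(req, HTTP_OK, "OK", buf);
            evbuffer_free(buf);
        }

        /* Steps 3-6: receive a request and return WITHOUT replying. In the real
           application this would queue the request into My System; here the
           completion path is called directly just to keep the sketch runnable. */
        static void on_request(struct evhttp_request *req, void *arg)
        {
            respond(req, "hello");
        }

        int main(void)
        {
            struct event_base *base = event_base_new();
            struct evhttp *http = evhttp_new(base);
            evhttp_set_gencb(http, on_request, NULL);
            evhttp_bind_socket(http, "0.0.0.0", 8080);
            event_base_dispatch(base);
            return 0;
        }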


  • WPF/MVVM - keep ViewModels in application component or separate?

    - by Anders Juul
    Hi all, I have a WPF application where I'm using the MVVM pattern. The ViewModels get activated for actions that require user input, and thus I need to activate views from the VM. I've started out separating the VMs into their own component/assembly, partly because I see them as the unit-testable part, partly because views should rely on VMs, not the other way round. But when I then need to bring up a window, the window is not known by the VM. All the introductions I find place the VMs in the WPF/App component, thus eliminating the problem. This article recommends keeping them in separate layers: http://waf.codeplex.com/wikipage?title=Architecture%20-%20Get%20The%20Big%20Picture&referringTitle=Home

    As I see it, I have the following choices:

    1. Move the VMs to the WPF/App assembly, allowing the VMs to access the windows directly.
    2. Place interfaces for the views in the VM assembly, implement the views in the WPF/App assembly, and register the connection through IoC or alternative means (see the sketch below).
    3. File a 'request' from the VM into some mechanism/bus that routes the request (but which mechanism?! E.g. something in Prism?!).

    What's the recommendation? Thanks for any comments, Anders, Denmark
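
    To illustrate choice 2, a minimal sketch of what I have in mind (all names invented; the registration call at the end would depend on the IoC container used):

        using System;

        // In the VM assembly: the view is known only through an interface.
        public interface IEditWindow
        {
            bool? ShowDialog();
        }

        public class EditViewModel
        {
            private readonly Func<IEditWindow> createWindow;

            public EditViewModel(Func<IEditWindow> createWindow)
            {
                this.createWindow = createWindow;
            }

            public void Edit()
            {
                createWindow().ShowDialog(); // no reference to WPF types here
            }
        }

        // In the WPF/App assembly: Window already has a matching ShowDialog(),
        // so implementing the interface is a one-liner...
        public partial class EditWindow : System.Windows.Window, IEditWindow { }

        // ...and the container wires the two together at startup, e.g.:
        // container.Register<IEditWindow, EditWindow>();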


  • Events and references pattern

    - by serhio
    In a project I have the following relation between BO (business objects) and GUI objects: G could represent a graphic with timelines, C a timeline curve, P the points of that curve and T the time that each point represents. Each GUI object is associated with the corresponding BO. When T changes, GUI P catches the Changed event and updates its location. So when G should be modified, it modifies its objects internally; as a result T changes, P moves and the GuiG visually changes. Everything is OK.

    But there is an inconvenience in this architecture: a BO should not be recreated, because that would break the link between the BO and the GUI object. In particular, GUI P should always hold the same reference to T. If in the business logic I do, e.g.,

        P1.T = new T(this.T + 10)

    GUI_P1 will not move any more, because it is waiting for an event from the reference to the former P1.T object, which no longer belongs to P1. So the solution was to always modify the existing objects, never recreate them. But here is another inconvenience: performance. Say I have a ready newC object that should replace the older one. Instead of doing

        G1.C = newC

    I have to, for each T in each P in C, replace it with the T from the corresponding P of newC. Is there a more optimal way to do this?
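
    One direction I am considering, sketched in C# (names invented): let the GUI point re-hook its handler whenever its T reference is swapped, so that assigning a new T stops breaking the link:

        using System;

        public class T
        {
            public event EventHandler Changed; // raised by the business logic
        }

        public class GuiP
        {
            private T time;

            public T Time
            {
                get { return time; }
                set
                {
                    if (time != null) time.Changed -= OnTimeChanged; // detach from the old T
                    time = value;
                    if (time != null)
                    {
                        time.Changed += OnTimeChanged;               // attach to the new T
                        OnTimeChanged(time, EventArgs.Empty);        // reposition once immediately
                    }
                }
            }

            private void OnTimeChanged(object sender, EventArgs e)
            {
                // update the point's location from Time
            }
        }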


  • How can I use JSONP to download client-side javascript objects?

    - by Alex Mcp
    I'm trying to get client-side JavaScript objects saved as a file locally. I'm not sure if this is possible. The basic architecture is this:

    1. Ping an external API to get back a JSON object
    2. Work client-side with that object, and eventually have a "download me" link
    3. This link sends the data to my server, which processes it and sends it back with the MIME type application/json, which (should) prompt the user to download the file locally

    Right now here are my pieces. Server-side code:

        <?php
        $data = array('zero', 'one', 'two', 'testing the encoding');
        $json = json_encode($data);
        //$json = json_encode($_GET['']); //eventually I'll encode their data, but I'm testing
        header("Content-type: application/json");
        header('Content-Disposition: attachment; filename="backup.json"');
        echo $_GET['callback'] . ' (' . $json . ');';
        ?>

    Relevant client-side code:

        $("#download").click(function(){
            var json = JSON.stringify(collection); //serializes their object
            $.ajax({
                type: "GET",
                url: "http://www.myURL.com/api.php?callback=?", //this is the above script
                dataType: "jsonp",
                contentType: 'jsonp',
                data: json,
                success: function(data){
                    console.log( "Data Received: " + data[3] );
                }
            });
            return false;
        });

    Right now when I visit the api.php site with Firefox, it prompts a download of download.json, and that results in this text file, as expected:

        (["zero","one","two","testing the encoding"]);

    And when I click #download to run the AJAX call, it logs in Firebug:

        Data Received: testing the encoding

    which is almost what I'd expect. I'm receiving the JSON string and serializing it, which is great. I'm missing two things. The actual questions:

    1. What do I need to do to get the same prompt-to-download behavior that I get when I visit the page in a browser (much simpler)?
    2. How do I access, server-side, the JSON object being sent to the server, to serialize it? I don't know what index it is in the GET array (silly, I know, but I've tried almost everything).


  • WCF REST: adding data using POST or PUT gives 400 Bad Request

    - by user55474
    Hi, how do I add data using the WCF REST architecture? I don't want to use the ChannelFactory to call my method; I want something similar to the WebRequest and WebResponse used for GET, or to the AJAX WebServiceProxy restInvoke. Or do I always have to use the WebChannelFactory implementation? I am getting a 400 Bad Request using the following:

        Dim url As String = "http://localhost:4475/Service.svc/Entity/Add"
        Dim req As WebRequest = WebRequest.Create(url)
        req.Method = "POST"
        req.ContentType = "application/xml; charset=utf-8"
        req.Timeout = 30000
        req.Headers.Add("SOAPAction", url)
        Dim xEle As XElement
        xEle = <Entity xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
                   <Name>Entity1</Name>
               </Entity>
        Dim sXML As String = xEle.Value
        req.ContentLength = sXML.Length
        Dim sw As New System.IO.StreamWriter(req.GetRequestStream())
        sw.Write(sXML)
        sw.Close()
        Dim res As HttpWebResponse = req.GetResponse()

    The service contract is as follows:

        <OperationContract()> _
        <WebInvoke(Method:="PUT", UriTemplate:="Entity/Add")> _
        Function AddEntity(ByVal e1 As Entity)

    The data contract is as follows:

        <Serializable()> _
        <DataContract()> _
        Public Class Entity
            Private m_Name As String
            <DataMember()> _
            Public Property Name() As String
                Get
                    Return m_Name
                End Get
                Set(ByVal value As String)
                    m_Name = value
                End Set
            End Property
        End Class

    thanks


  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very ineffective. Please don't be angry if I am wrong and there is some reason I do not see for all these principles; I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together knew a reason to do so. What exactly, you ask? I'll tell you right away, using C as an example.

    1) Stack local-scope memory allocation: Typical local memory allocation uses the stack: just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM relative to default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b and so on, and access them simply by mov 0x00000000, #10? You wouldn't actually be accessing memory blocks 0x00000000 and 0x00000004, but those your OS set the paging tables to. Actually, since memory access via ebp and esp uses indirect addressing, "my" way would be even faster (see the illustration below).

    2) Variable allocation duplicity: When you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and the actual number in memory. So there are two entries of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes in RAM. But why? Why not just, when declaring a variable, work out at which memory block it will sit, and then whenever it is used, just insert that memory location?
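
    To illustrate point 1, here is roughly what I mean, in NASM-style syntax (purely illustrative; the absolute address is made up, and in reality the OS/linker would have to hand out those addresses):

        ; what compilers emit today: an ebp-relative, indirect store
        mov dword [ebp-4], 10

        ; what I am proposing: one fixed virtual address per variable,
        ; left to the paging hardware to translate
        mov dword [0x00400000], 10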


  • Consolidate multiple site files into single location

    - by seengee
    We have a custom PHP/MySQL CMS running on Linux/Apache that's rolled out to multiple sites (20+) on the same server. Each site uses exactly the same CMS files, with a few files per site being customised. The customised files for each site are:

        /library/mysql_connect.php
        /public_html/css/*
        /public_html/ftparea/*
        /public_html/images/*

    There are also a couple of other random files inside /public_html/includes/ that are unique to each site. Other than this, each site on the server uses exactly the same files, each site sitting within /home/username/. There is obviously a massive amount of replication here, as each time we want to deploy a system update we need to update each user account. Given that the common site files are all stored in SVN, it would make far more sense if we were able to simply commit to SVN and deploy to a single location directly from there. Unfortunately, making a major architecture change at this stage could be problematic.

    In my mind the ideal scenario would mean creating an account like /home/commonfiles/ and having each site use these common files unless an account-specific file exists; for example, a request is made to /home/user/public_html/index.php, but as this file doesn't exist the request is redirected to /home/commonfiles/public_html/index.php. I know that this approach is generally possible, similar to how Zend Framework (and probably others) redirects all requests that don't match a specific file to index.php. I'm just not sure exactly how to go about implementing it, and whether it's actually advisable. Would really welcome any input/ideas people have got. Thanks.
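
    For what it's worth, here is the fallback idea expressed as a tiny PHP front controller (just a sketch: it assumes all requests are routed through this file, ZF-style, and it does no sanitisation of the requested path):

        <?php
        // Use the account-specific file when it exists, otherwise fall back
        // to the shared copy under /home/commonfiles/.
        $page   = strtok(ltrim($_SERVER['REQUEST_URI'], '/'), '?'); // e.g. "index.php"
        $local  = '/home/user/public_html/overrides/' . $page;      // per-site files
        $shared = '/home/commonfiles/public_html/' . $page;

        require(file_exists($local) ? $local : $shared);
        ?>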


  • Separation of business logic

    - by bruno
    Dear all, while optimizing the architecture of the applications on our website, I came across a problem that I don't know the best solution for. At the moment we have a small DLL based on this structure:

        Database <-> DAL <-> BLL

    The DAL uses business objects to pass data to the BLL, which passes it on to the applications that use this DLL. Only the BLL is public, so any application that includes this DLL can see the BLL. In the beginning this was a good solution for our company. But as we add more and more applications onto that DLL, the bigger the BLL gets. Now we don't want some applications to be able to see BLL logic that belongs to other applications, and I don't know the best solution for that.

    The first thing I thought of was to move the BLL out and separate it into other DLLs that I can include per application. But then the DAL must be public so the other DLLs can get the data... and I'm not sure that is a good solution. My other solution would be just to separate the BLL into different namespaces and include only the namespaces you need in each application. But with this solution you can still directly access the other BLLs if you want to. So I'm asking for your opinions. Thx, Bruno


  • 32/64 bit problems with Eclipse CDT on Ubuntu

    - by waffleShirt
    I have just recently started running Linux on my PC and I am trying to start learning OpenGL. I am using the latest version of Eclipse CDT as my IDE, and my system is Ubuntu 10.10, 64-bit version. The problem I am having is that whenever I try to run a build from within the IDE I get the error message "Launch Failed. Binary Not Found." I've done a lot of looking around on the internet but I still can't solve the problem. I know for a fact that the binary is built; it can be run from a terminal window. According to posts I have seen, the problem is that Eclipse tries to run a 32-bit binary, but GCC 4.4.5 defaults to 64-bit binaries on a 64-bit system. I've seen a lot of information about using the -m32 flag in makefiles, but then I still get the following output in Eclipse:

        make all
        g++ -o HelloWorld2 main.o
        /usr/bin/ld: i386 architecture of input file `main.o' is incompatible with i386:x86-64 output
        /usr/bin/ld: final link failed: Invalid operation
        collect2: ld returned 1 exit status
        make: *** [HelloWorld2] Error 1

    What I would like to know is how to either get Eclipse to launch the 64-bit binaries, or have Eclipse correctly compile 32-bit binaries.
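
    For reference, these are presumably the two steps being run, reconstructed from the output above (the compile step picked up -m32 from my edits, but the link line has no -m32, which is what ld is complaining about):

        g++ -m32 -c main.cpp          # compile: produces a 32-bit main.o
        g++ -o HelloWorld2 main.o     # link: defaults to 64-bit, hence the i386 mismatch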


  • Is the REST support in Spring 3's MVC Framework production quality yet?

    - by glenjohnson
    Hi all, since Spring 3 was released in December last year, I have been trying out the new REST features in the MVC framework for a small commercial project, implementing a few RESTful web services which consume XML and return XML views using JiBX. I plan to use either Hibernate or JDBC templates for the data persistence.

    As a Spring 2.0 developer, I have found Spring 3's (and 2.5's) new annotation-based way of doing things quite a paradigm shift, and have personally found some of the new MVC annotation features difficult to get up to speed with for non-trivial applications; as such, I am often having to dig for information in forums and blogs that is not apparent from going through the reference guide or from the various Spring 3 REST examples on the web.

    For deadline-driven, production-quality and mission-critical applications implementing a RESTful architecture, should I be holding off from Spring 3 and rather be using mature JSR 311 (JAX-RS) compliant frameworks like Restlet or Jersey for the REST layer of my code (together with Spring 2 / 2.5 to tie things together)? I had no problems using Restlet 1.x in a previous project and it was quite easy to get up to speed with (no magic tricks behind the scenes), but when starting my current project it initially looked like the new REST stuff in Spring 3's MVC framework would make life easier.

    Do any of you out there have any advice to give on this? Does anyone know of any commercial / production-quality projects using, or having successfully delivered with, the new REST stuff in Spring 3's MVC framework? Many thanks, Glen
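
    For context, the style of controller I mean, as a trimmed-down sketch (not code from the actual project; the service and view names are invented):

        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.stereotype.Controller;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RequestMethod;
        import org.springframework.web.servlet.ModelAndView;

        @Controller
        public class OrderController {

            private final OrderService orderService; // invented service

            @Autowired
            public OrderController(OrderService orderService) {
                this.orderService = orderService;
            }

            // GET /orders/{id}, rendered as XML by a JiBX-backed view resolved by name
            @RequestMapping(value = "/orders/{id}", method = RequestMethod.GET)
            public ModelAndView getOrder(@PathVariable("id") long id) {
                return new ModelAndView("orderXmlView", "order", orderService.find(id));
            }
        }

        // Invented placeholder so the sketch stands alone.
        interface OrderService { Object find(long id); }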


  • Have I discovered a bug in the WPF engine?

    - by bitbonk
    We have an MFC 8 application compiled with /clr that contains a large number of Windows Forms UserControls, which in turn contain WPF user controls hosted via ElementHost. Due to the architecture of our software we cannot use HwndHost directly. We have observed an extremely strange behavior here that we cannot make any sense of: when the CPU load is very high during startup of the application and there are a lot of live ElementHost instances, the whole property engine completely stops working. For example, animations that usually just work fine now never update the values of the bound properties; they just stay at some random value after startup. When I set a property that is not bound to anything, the value is correctly stored in the dependency property (calling the getter returns the new value) but the visual representation never reflects it. I set the background to red but the background color does not change.

    We tested this on a lot of different machines, all running Windows XP SP2, and it is quite reproducible. The funny thing is that there is in fact one situation where the bound properties actually pick up a new value from the animation and the visual gets updated based on the property values: when I resize the ElementHost, or when I hide and re-show the parent native control. As soon as I do this, properties that are bound to an animation pick up a new value and the visuals re-render based on the new property values, but just once; if I want to see another update I have to resize the ElementHost again.

    Do you have any explanation of what could be happening here, or how I could approach this problem to find out? What can I do to debug this? Is there a way I can get more information about what WPF actually does, or where WPF might have crashed? To me it currently seems like a bug in WPF itself, since it only happens at high CPU load during startup.


  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day, I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X while the code is on machine Y? Say I build and run this code on my dev machine (localhost):

        servicehost = new ServiceHost(typeof(MyService1));
        servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(),
            "net.tcp://my.datacenter.com/MyApp/MyService1"); // This is normally set to localhost.

    What implementation must be done on the datacenter server so that, if I point to http://my.datacenter.com/MyApp/MyService1, it routes the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet.

    This is a possible infrastructure that we are researching, to see if we can create a service-bus type architecture so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance.


  • jboss cache as hibernate 2nd level - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build the architecture described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches, with each cache having its own store), but with JBoss Cache configured as Hibernate's second-level cache. I've read the manual for several days and played with the settings, but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it is not persisted in the datasource/database of the target (non-originating) cluster host.

    I had hoped that a node might be persisted at eviction, so I wrote a cache listener and attached it to the @NodeEvicted event. I found that though I could adjust the eviction policy to fully control it, no persistence takes place. Then I thought I could try modifying the CacheLoader to set "passivation" to true, but I found that in my case (Hibernate second-level cache) I don't have a way to access the loader.

    I wonder if replicated data persistence is possible at all through configuration tuning? If not, will it work for me to implement some manual persistence in a CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with one modification: cacheMode set to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project generated with Seam. I guess that's not important, though.
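
    The manual-persistence fallback I have in mind would look roughly like this (a sketch from memory against the JBoss Cache 3.x listener API; the persistence call itself is invented and would go through Hibernate/JDBC):

        import org.jboss.cache.Fqn;
        import org.jboss.cache.notifications.annotation.CacheListener;
        import org.jboss.cache.notifications.annotation.NodeEvicted;
        import org.jboss.cache.notifications.event.NodeEvictedEvent;

        @CacheListener
        public class EvictionPersister {

            @NodeEvicted
            public void nodeEvicted(NodeEvictedEvent event) {
                // Act only after the eviction has happened, and only for
                // nodes that did not originate on this cluster member.
                if (!event.isPre() && !event.isOriginLocal()) {
                    persistToDatasource(event.getFqn()); // invented write-through
                }
            }

            private void persistToDatasource(Fqn fqn) {
                // look the entry up and save it to the local database -- omitted
            }
        }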


  • What should be taught in a "Fundamentals of programming" course at university?

    - by Dervin Thunk
    I have started a new question (see here), because I think the topic is of importance in a more general form. The question is now: if you were a professor in a Computer Science department at some university, what would make it into your course? This is a programming course: second term, first year of computer science/computer engineering. Remember that you have a limited amount of time, and students are of different levels of competence; some may become scientists, but some will also go on to be programmers in companies of different kinds. You have to cater to all.

    Bonus: what language? (Although see this question for my current thoughts about this...) Maybe you want to attach a course outline from some university? See here for an even more general question about this.

    Answer: I can't really summarize this post... I guess it was too subjective. However, it looks like we have to cover the history of computing up to a certain point, computer architecture (memory, registers, and so on), C, and finally some basic algorithms and data structures in a problem-solving fashion. That will be the bare bones of the course. Thanks all. I will accept the most-voted-up answer to close the thread, as it should be done.


  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (up to 60 TB in total, usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
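
    If I do end up hand-building it, I imagine the core loop would look something like this (an untested sketch using libarchive and OpenSSL; error handling and the tape-blocking details are omitted):

        #include <archive.h>
        #include <archive_entry.h>
        #include <openssl/md5.h>
        #include <stdio.h>
        #include <sys/stat.h>

        /* Read the file once, feeding each buffer to both the digest and the archive. */
        static void add_file(struct archive *a, const char *path)
        {
            struct stat st;
            stat(path, &st);

            struct archive_entry *entry = archive_entry_new();
            archive_entry_set_pathname(entry, path);
            archive_entry_set_size(entry, st.st_size);
            archive_entry_set_filetype(entry, AE_IFREG);
            archive_entry_set_perm(entry, 0644);
            archive_write_header(a, entry);

            MD5_CTX md5;
            MD5_Init(&md5);
            unsigned char buf[1 << 16], digest[MD5_DIGEST_LENGTH];
            FILE *f = fopen(path, "rb");
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
                MD5_Update(&md5, buf, n);      /* checksum side */
                archive_write_data(a, buf, n); /* tar side */
            }
            fclose(f);
            MD5_Final(digest, &md5);
            archive_entry_free(entry);

            for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
                printf("%02x", digest[i]);     /* md5sum-style line per file */
            printf("  %s\n", path);
        }

        int main(int argc, char **argv)
        {
            struct archive *a = archive_write_new();
            archive_write_set_format_pax_restricted(a);
            archive_write_open_filename(a, "/dev/nst0"); /* or a plain tar file */
            for (int i = 1; i < argc; i++)
                add_file(a, argv[i]);
            archive_write_close(a);
            archive_write_free(a);
            return 0;
        }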


  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it, and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands through the network while running the same simulation on every player's machine.

    And now there is a problem: the entire engine uses doubles everywhere, and floating-point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. And it is not grid-based at all, and it has a simple physics engine to move the spaceships (the ships have impulse and angular momentum...), so recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution).

    So I have two options so far:

    1. Say goodbye to the current code and restart from scratch using integers.
    2. Make the game LAN-only, where there is enough bandwidth to have 8 players with thousands of units, sending the positions, orientations etc. in (almost) every frame...

    So I am looking for better options (or even tips on migrating the code to fixed point without messing everything up...).
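
    In case it helps frame the migration question, the kind of drop-in value type I imagine replacing double with is a 16.16 fixed-point struct along these lines (a sketch only; range and precision trade-offs not thought through yet):

        // Minimal 16.16 fixed-point type: all arithmetic is integer,
        // so results are bit-identical across CPUs and compilers.
        public struct Fixed
        {
            private readonly int raw;                 // value * 65536
            private Fixed(int raw) { this.raw = raw; }

            public static Fixed FromInt(int v)       { return new Fixed(v << 16); }
            public static Fixed FromDouble(double v) { return new Fixed((int)(v * 65536.0)); } // load-time only!

            public static Fixed operator +(Fixed a, Fixed b) { return new Fixed(a.raw + b.raw); }
            public static Fixed operator -(Fixed a, Fixed b) { return new Fixed(a.raw - b.raw); }
            public static Fixed operator *(Fixed a, Fixed b) { return new Fixed((int)(((long)a.raw * b.raw) >> 16)); }
            public static Fixed operator /(Fixed a, Fixed b) { return new Fixed((int)(((long)a.raw << 16) / b.raw)); }

            public double ToDouble() { return raw / 65536.0; } // for rendering only, never for simulation
        }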


  • Trying to make a plugin system in C++/Qt

    - by Pirate for Profit
    I'm making a task-based program that needs to have plugins. Tasks need to have properties which can be easily edited; I think this can be done with Qt's meta-object (MOC) reflection capabilities (I could be wrong, but I should be able to stick this in a QtPropertyBrowser?). So here's the base:

        class Task : public QObject
        {
            Q_OBJECT
        public:
            explicit Task(QObject *parent = 0) : QObject(parent) {}
            virtual void run() = 0;

        signals:
            void taskFinished(bool success = true);
        };

    Then a plugin might have this task:

        class PrinterTask : public Task
        {
            Q_OBJECT
        public:
            explicit PrinterTask(QObject *parent = 0) : Task(parent) {}

            void run()
            {
                Printer::getInstance()->Print(this->getData()); // fictional
                emit taskFinished(true);
            }

            inline const QString &getData() const;
            inline void setData(QString data);

            Q_PROPERTY(QString data READ getData WRITE setData) // for reflection
        };

    In a nutshell, here's what I want to do:

        // load plugin
        // find all the Task interface implementations in it
        // have the user choose a Task and edit its specific Q_PROPERTYs
        // run the Task

    It's important that one .dll has multiple tasks, because I want them to be grouped by their module. For instance, "FileTasks.dll" could have tasks for deleting files, making files, etc. The only problem with Qt's plugin setup is that I want to store X amount of Tasks in one .dll module. As far as I can tell, you can only load one interface per plugin (I could be wrong?). If so, the only possible way to do what I want is to create a FactoryInterface with string-based keys which return the objects (as in Qt's Plug-And-Paint example), which is a terrible boilerplate that I would like to avoid (see the sketch below for the least-bad shape I can come up with).

    Does anyone know a cleaner C++ plugin architecture than Qt's to do what I want? Also, am I safely assuming Qt's reflection capabilities will do what I want (i.e. being able to edit an unknown, dynamically loaded task's properties with the QtPropertyBrowser before dispatching)?
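
    For reference, the variant I keep coming back to: a single factory interface per module that hands back every Task it defines, at least dropping the string keys (a Qt 4 style sketch, untested; DeleteFileTask and MakeFileTask are invented Task subclasses):

        // One factory per plugin .dll; it returns all the module's tasks at once.
        class TaskFactoryInterface
        {
        public:
            virtual ~TaskFactoryInterface() {}
            virtual QList<Task*> createTasks(QObject *parent = 0) const = 0;
        };

        Q_DECLARE_INTERFACE(TaskFactoryInterface, "com.example.TaskFactoryInterface/1.0")

        // Inside FileTasks.dll:
        class FileTaskFactory : public QObject, public TaskFactoryInterface
        {
            Q_OBJECT
            Q_INTERFACES(TaskFactoryInterface)
        public:
            QList<Task*> createTasks(QObject *parent = 0) const
            {
                return QList<Task*>() << new DeleteFileTask(parent)
                                      << new MakeFileTask(parent);
            }
        };

        Q_EXPORT_PLUGIN2(filetasks, FileTaskFactory)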


  • Componentizing complex functionality in an MVC web app

    - by NXT
    Hi everyone, this is a question about MVC web-app architecture, and how it can be extended to componentize moderately complex units of functionality.

    I have an MVC-style web app with a customer-facing credit card charge page. I've been asked to allow the admins to enter credit card payments as well, for times when credit cards are taken over the phone. The customer-facing credit card charge section of the website is currently its own controller, with approximately three pages and a login. That controller is responsible for:

    - Customer login credential authentication
    - Credit card data collection
    - Calling a library to do the actual charge
    - Reporting the results to the user

    I would like to extract the card data collection pages into a component of some kind, so that I can easily reuse the code on the admin side of the app. Right now my components are limited to single "view" pages with PHP-style embedded Perl code. This is a simple, custom MVC framework written in Perl. Right now, controllers are called directly by the framework to service web requests. My idea is to allow controllers to be called from other controllers, so that I can componentize more complex functionality. For simplicity I think I prefer composition over inheritance, even though it will require writing a bunch of pass-through methods (actions). Being Perl, I could in theory do multiple inheritance. I'm wondering if anyone with experience in other MVC web frameworks can comment on how this sort of thing is usually done. Thank you.
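
    To make the composition idea concrete, here is the kind of pass-through I mean (a bare sketch; the class names and the request object are invented):

        package AdminController;
        use strict;
        use warnings;
        use CardEntryController;   # invented: the extracted card-data-collection component

        sub new {
            my ($class) = @_;
            my $self = { card_entry => CardEntryController->new };
            return bless $self, $class;
        }

        # Pass-through action: the admin controller delegates the card-entry
        # pages to the embedded component.
        sub enter_payment {
            my ($self, $request) = @_;
            return $self->{card_entry}->handle($request);
        }

        1;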


  • Handling Types Defined in Plug-ins That Are No Longer Available

    - by Chris
    I am developing a .NET Framework application that allows users to maintain and save "projects". A project can consist of components whose types are defined in the assemblies of the framework itself and/or in third-party assemblies that will be made available to the framework via a yet-to-be-built plug-in architecture. When a project is saved, it is simply binary-serialised to file. Projects are portable, so multiple users can load the same project into their own instances of the framework (just as different users may open the same MS Word document in their own local copies of MS Word). What's more, the plug-ins available to one user's framework might not be available to another's.

    I need some way of ensuring that when a user attempts to open (i.e. deserialise) a project that includes a type whose defining assembly cannot be found (either because of a framework version incompatibility or the absence of a plug-in), the project still opens, but the offending type is somehow substituted or omitted. The trouble is, the research I've done to date does not even hint at a suitable approach. Any ideas would be much appreciated, thanks.


  • C++ & proper TDD

    - by Kotti
    Hi! I recently developed a small-sized project in C#, and during the whole project our team used the test-driven development (TDD) technique (xUnit, Moq). I really think this was awesome, because (when paired with C#) this approach allowed us to relax when coding, relax when designing and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process, and, well, it allowed me (eventually) to get the same result with fewer brain cells working.

    Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually think that TDD there was a step back in terms of rapid application development. I had moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them and swearing at my monitor.

    And, well, I obviously can't ask something like "what did I do wrong?" or "what was wrong with my approach?", because I don't know what to describe. But if there are any people who are used to TDD in C++ (and probably C# too), could you please advise me how to do this properly? Framework recommendations, architecture approaches, plain coding advice: if you are experienced in TDD & C++, please respond.
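
    For reference, the flavour of test I was fighting with looked something like this (a contrived but compilable sketch; Renderer and Scene are invented stand-ins for my real classes):

        #include <gmock/gmock.h>
        #include <gtest/gtest.h>

        // A hand-rolled interface for the dependency, so it can be mocked.
        class Renderer {
        public:
            virtual ~Renderer() {}
            virtual void draw(int entityId) = 0;
        };

        class MockRenderer : public Renderer {
        public:
            MOCK_METHOD1(draw, void(int));
        };

        // A trivial class under test, just to make the sketch self-contained.
        class Scene {
        public:
            explicit Scene(Renderer *r) : renderer_(r) {}
            void render() { renderer_->draw(42); }
        private:
            Renderer *renderer_;
        };

        TEST(SceneTest, DrawsTheEntity) {
            MockRenderer renderer;
            EXPECT_CALL(renderer, draw(42)); // the expectation that eats the thinking time
            Scene scene(&renderer);
            scene.render();
        }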


  • Android NDK import-module / code reuse

    - by Graeme
    Morning! I've created a small NDK project which allows dynamic serialisation of objects between Java and C++ through JNI. The logic works like this:

        Bean -> JavaCInterface.java -> JavaCInterface.cpp -> JavaCInterface.java -> Bean

    The problem is I want to use this functionality in other projects. I separated out the test code from the project and created a "Tester" project. The Tester project sends a Java object through to C++, which then echoes it back to the Java layer. I thought linking would be pretty simple ("simple" in terms of NDK/JNI usually means a day of frustration). I added the JNIBridge project as a source project and added the following lines to the build files:

        # Environment:
        NDK_MODULE_PATH=.../JNIBridge/jni/

        # JNIBridge/jni/JavaCInterface/Android.mk:
        ...
        include $(BUILD_STATIC_LIBRARY)

        # JNITester/jni/Android.mk:
        ...
        include $(BUILD_SHARED_LIBRARY)
        $(call import-module, JavaCInterface)

    This all works fine. The C++ files which rely on headers from the JavaCInterface module work fine, and the Java classes can happily use interfaces from the JNIBridge project. All the linking is happy. Unfortunately JavaCInterface.java, which contains the native method declarations, cannot see the JNI methods located in the static library. (Logically they are in the same project, but both are imported into the project where you wish to use them through the above mechanism.)

    My current solutions are as follows. I'm hoping someone can suggest something that will preserve the modular nature of what I'm trying to achieve:

    1. Include the JavaCInterface cpp files in the calling project like so:

        LOCAL_SRC_FILES := FunctionTable.cpp $(PATH_TO_SHARED_PROJECT)/JavaCInterface.cpp

    But I'd rather not do this, as it would mean updating each depending project whenever I changed the JavaCInterface architecture.

    2. Create a new set of JNI method signatures in each local project which then link to the imported modules. Again, this binds the implementations too tightly.


  • Is there any way to provide a custom factory for entities created by EF4?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity Framework 4 supports them. I decided to try it out with a domain-driven-design oriented architecture, and ended up with domain entities that have dependencies on services. So far so good. Imagine my Products are POCO objects. When I query for objects like this:

        NorthwindContext db = new NorthwindContext();
        var products = db.Products.ToList();

    EF creates the Product instances for me. Now I want to inject dependencies into my POCO objects (products). The only way I see is to add some method to NorthwindContext that does something like the pseudo-code below:

        public List<Product> GetProducts()
        {
            var products = database.Products.ToList();
            container.BuildUp(products); //inject dependencies
            return products;
        }

    But what if I want to make my repository more flexible, like this:

        public ObjectSet<Product> GetProducts() { ... }

    So I really need a factory, to make it more lazy- and LINQ-friendly. Please help!
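
    One hook I have been eyeing (a sketch only; I have not confirmed it plays well with lazy loading and LINQ scenarios): ObjectContext raises ObjectMaterialized for every entity it creates, which might let the container do the injection without a wrapper method. Unity is assumed here; Product is a stand-in:

        using System.Data.Objects;
        using Microsoft.Practices.Unity;

        public class Product { } // stand-in entity

        public class NorthwindContext : ObjectContext
        {
            private readonly IUnityContainer container;

            public NorthwindContext(IUnityContainer container)
                : base("name=NorthwindEntities")
            {
                this.container = container;
                // Fires once for each entity EF materializes, including via LINQ queries.
                this.ObjectMaterialized +=
                    (sender, e) => this.container.BuildUp(e.Entity.GetType(), e.Entity);
            }

            public ObjectSet<Product> Products
            {
                get { return CreateObjectSet<Product>(); }
            }
        }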


  • My QFileSystemModel doesn't work as expected in PyQt

    - by Skilldrick
    I'm learning the Qt model/view architecture at the moment, and I've found something that doesn't work as I'd expect it to. I've got the following code (adapted from the Qt Model Classes documentation):

        from PyQt4 import QtCore, QtGui

        model = QtGui.QFileSystemModel()
        parentIndex = model.index(QtCore.QDir.currentPath())
        print model.isDir(parentIndex)            # prints True
        print model.data(parentIndex).toString()  # prints name of current directory

        childIndex = model.index(0, 0, parentIndex)
        print model.data(childIndex).toString()

        rows = model.rowCount(parentIndex)
        print rows  # prints 0 (even though the current directory has directory and file children)

    The question: is this a problem with PyQt, have I just done something wrong, or am I completely misunderstanding QFileSystemModel? According to the documentation, model.rowCount(parentIndex) should return the number of children in the current directory. The QFileSystemModel docs say that it needs an instance of a GUI application, so I've also placed the above code in a QWidget as follows, but with the same result:

        import sys
        from PyQt4 import QtCore, QtGui

        class Widget(QtGui.QWidget):
            def __init__(self, parent=None):
                QtGui.QWidget.__init__(self, parent)

                model = QtGui.QFileSystemModel()
                parentIndex = model.index(QtCore.QDir.currentPath())
                print model.isDir(parentIndex)
                print model.data(parentIndex).toString()

                childIndex = model.index(0, 0, parentIndex)
                print model.data(childIndex).toString()

                rows = model.rowCount(parentIndex)
                print rows

        def main():
            app = QtGui.QApplication(sys.argv)
            widget = Widget()
            widget.show()
            sys.exit(app.exec_())

        if __name__ == '__main__':
            main()


  • Objective-C properties are not being recognized in header file?

    - by Greg
    Hey folks, I wonder if I'm doing something completely stupid here... I'm clearly missing something. I've gotten used to the pattern of defining properties of a custom class; however, I seem to be hitting a point where extended classes do not recognize new properties. Case in point, here's my header file:

        #import "MyTableViewController.h"

        @interface MyRootController : MyTableViewController {
            NSMutableArray *sectionList;
        }

        @property (nonatomic, retain) NSMutableArray *sectionList;

        @end

    Now, for some reason that sectionList property is not turning green within my interface file (i.e. it's not being recognized as a custom property, it seems). As a result, I'm getting all kinds of errors down in my implementation. The first is right at the top of the implementation, where I try to synthesize the property:

        #import "MyRootController.h"

        @implementation MyRootController

        @synthesize sectionList;

    That @synthesize line throws the error "No declaration of property 'sectionList' found in the interface". So, this is really confusing. I'm clearly doing something wrong, although I can't put my finger on what.

    One thought: I am extending another custom class of my own. Do I need to specify some kind of super-class declaration to keep the architecture from getting sealed one level up? Thanks!


  • Upgraded to Xcode 4 -- Endless stream of duplicate symbol errors causing build errors

    - by D-Nice
    Everything was working perfectly fine in Xcode 3 yesterday before I upgraded. So I completed the upgrade, restarted my computer, and opened my old project. I had to reconfigure a few settings, like the header paths, so that I could begin to compile. I'm using AdWhirl for ad mediation, and at this point my errors begin to read something like:

        duplicate symbol _OBJC_METACLASS_$_SBJSON in
        /Users/Admin/Desktop/TMapLiteAdwhirl/AdWhirl/MMSDK/libMMSDK.a(SBJSON.o) and
        /Users/Admin/Library/Developer/Xcode/DerivedData/TruxMapLite-bgpylibztethnlhkfkdumpvrjvgy/Build/Intermediates/TruxMapLite.build/Debug-iphoneos/TruxMapLite.build/Objects-normal/armv6/SBJSON.o
        for architecture armv6

    The library it refers to is the SDK for one of the ad networks I'm including in AdWhirl. Both of the 'duplicate symbols' refer to the SAME FILE, but they use different paths. If I still had Xcode 3, I would simply try excluding these libraries from the build path, but I have no idea how that can be done in Xcode 4. I've tried everything, all the way down to deleting the library and all associated files from my project, but when I do this I simply get the same type of error for a different library in the AdWhirl directory. This is incredibly frustrating, because before the upgrade everything was working smoothly and I was prepared to submit my binary. If anyone has any advice, I'd be more than happy to give it a try. Thanks!

