Search Results

Search found 3077 results on 124 pages for 'boost serialization'.


  • C++ Returning Multiple Items

    - by Travis Parks
    I am designing a class in C++ that extracts URLs from an HTML page. I am using Boost's Regex library to do the heavy lifting for me. I started designing a class and realized that I didn't want to tie down how the URLs are stored. One option would be to accept a std::vector<Url> by reference and just call push_back on it. I'd like to avoid forcing consumers of my class to use std::vector. So, I created a member template that took a destination iterator. It looks like this: template <typename TForwardIterator, typename TOutputIterator> TOutputIterator UrlExtractor::get_urls( TForwardIterator begin, TForwardIterator end, TOutputIterator dest); I feel like I am overcomplicating things. I like to write fairly generic code in C++, and I struggle to lock down my interfaces. But then I get into these predicaments where I am trying to templatize everything. At this point, someone reading the code doesn't realize that TForwardIterator is iterating over a std::string. In my particular situation, I am wondering if being this generic is a good thing. At what point do you start making code more explicit? Is there a standard approach to getting values out of a function generically?
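
    A minimal sketch of how callers might use the output-iterator design above, showing that it already supports std::vector, std::list, or anything else via std::back_inserter (the Url and UrlExtractor bodies here are assumptions for illustration, since only the signature appears in the question):

    ```cpp
    #include <iterator>
    #include <list>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for the poster's types, just to show the call sites.
    struct Url { std::string value; };

    struct UrlExtractor {
        template <typename TForwardIterator, typename TOutputIterator>
        TOutputIterator get_urls(TForwardIterator begin, TForwardIterator end,
                                 TOutputIterator dest) {
            (void)begin; (void)end;   // real extraction elided in this sketch
            return dest;
        }
    };

    int main() {
        std::string html = "<a href=\"http://example.com\">link</a>";
        UrlExtractor extractor;

        std::vector<Url> as_vector;   // one caller wants a vector...
        extractor.get_urls(html.begin(), html.end(), std::back_inserter(as_vector));

        std::list<Url> as_list;       // ...another wants a list; same interface
        extractor.get_urls(html.begin(), html.end(), std::back_inserter(as_list));
    }
    ```

    A common middle ground is to keep the output iterator (so callers still choose the container) but fix the input to what it really is, e.g. a const std::string& or a pair of std::string::const_iterator, so readers know the function walks over characters.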

    Read the article

  • Thread safe lazy construction of a singleton in C++

    - by pauldoo
    Is there a way to implement a singleton object in C++ that is: Lazily constructed in a thread-safe manner (two threads might simultaneously be the first user of the singleton - it should still only be constructed once). Doesn't rely on static variables being constructed beforehand (so the singleton object is itself safe to use during the construction of static variables). (I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed (i.e., even before static constructors are executed - their values may already be "initialized" in the program image)? If so - perhaps this can be exploited to implement a singleton mutex - which can in turn be used to guard the creation of the real singleton.) Excellent, it seems that I have a couple of good answers now (shame I can't mark 2 or 3 as being the answer). There appear to be two broad solutions: Use static initialisation (as opposed to dynamic initialisation) of a POD static variable, and implement my own mutex with that using the built-in atomic instructions. This was the type of solution I was hinting at in my question, and I believe I knew it already. Use some other library function like pthread_once or boost::call_once. These I certainly didn't know about - and am very grateful for the answers posted.
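
    A minimal sketch of the library-based approach, written against C++11's std::call_once (boost::call_once follows the same pattern); the std::once_flag is constant-initialized, so it is safe to use even during the construction of other static variables:

    ```cpp
    #include <mutex>

    class Singleton {
    public:
        static Singleton& instance() {
            // Exactly one thread runs the lambda, even if many race here first.
            std::call_once(init_flag_, [] { instance_ = new Singleton(); });
            return *instance_;
        }

    private:
        Singleton() {}
        Singleton(const Singleton&);             // non-copyable
        Singleton& operator=(const Singleton&);

        static std::once_flag init_flag_;        // constant-initialized (no dynamic init)
        static Singleton* instance_;             // intentionally leaked; never destroyed
    };

    std::once_flag Singleton::init_flag_;
    Singleton* Singleton::instance_ = 0;
    ```

    In C++11 and later, a function-local static (the Meyers singleton) gives the same guarantee with less code, since the standard requires its initialization to be thread-safe.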

    Read the article

  • PHP (CodeIgniter) Pass Object Through Session

    - by FranticPedantic
    I am using PHP5 and CodeIgniter and I am trying to implement a single sign-on feature with Facebook (although I don't think that Facebook is relevant to the question). I am somewhat of a novice with PHP and definitely one with CodeIgniter, so if you think my approach is just completely off, telling me that would be helpful too. So here is, in short, what I am doing: //Controller 1 $this->load->plugin("facebook"); $facebook = new Facebook(array ( 'appId' => $fbconfig['appid'], 'secret' => $fbconfig['secret'], 'cookie' => true, ) ); $fbsession = $facebook->getSession(); //works fine $this->session->set_userdata('facebook', serialize($facebook)); Now I would like to grab that Facebook object in a different controller. //Controller 2 $facebook = unserialize($this->session->userdata('facebook')); $fbsession = $facebook->getSession(); Produces the error: Call to undefined method getSession. So I look up more about serialization and think that maybe it just doesn't know what the Facebook object's attributes are. So I add in a $this->load->plugin('facebook'); to controller 2 as well, and I get a "Cannot redeclare class facebook." I am strongly suspecting that I am misunderstanding sessions here. Do I have to somehow tell PHP what kind of object it is? Thanks for the help.

    Read the article

  • non-copyable objects and value initialization: g++ vs msvc

    - by R Samuel Klatchko
    I'm seeing some different behavior between g++ and msvc around value initializing non-copyable objects. Consider a class that is non-copyable: class noncopyable_base { public: noncopyable_base() {} private: noncopyable_base(const noncopyable_base &); noncopyable_base &operator=(const noncopyable_base &); }; class noncopyable : private noncopyable_base { public: noncopyable() : x_(0) {} noncopyable(int x) : x_(x) {} private: int x_; }; and a template that uses value initialization so that the value will get a known value even when the type is POD: template <class T> void doit() { T t = T(); ... } and trying to use those together: doit<noncopyable>(); This works fine on msvc as of VC++ 9.0 but fails on every version of g++ I tested this with (including version 4.5.0) because the copy constructor is private. Two questions: Which behavior is standards compliant? Any suggestion of how to work around this in gcc (and to be clear, changing that to T t; is not an acceptable solution as this breaks POD types). P.S. I see the same problem with boost::noncopyable.
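
    A sketch of the usual workaround, similar in spirit to boost::value_initialized: wrap T in a holder whose constructor value-initializes its member, so no copy of T is ever required while PODs still get a known value:

    ```cpp
    // Holder that value-initializes its member; works for PODs and for
    // non-copyable class types alike, because T is never copied.
    template <class T>
    struct value_init_holder {
        T t;
        value_init_holder() : t() {}   // value-initialization, zeroes PODs
    };

    template <class T>
    void doit() {
        value_init_holder<T> holder;   // no accessible copy constructor needed
        T& t = holder.t;
        // ... use t ...
        (void)t;
    }
    ```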

    Read the article

  • How do I setup Linq to SQL and WCF

    - by Jisaak
    So I'm venturing out into the world of Linq and WCF web services and I can't seem to make the magic happen. I have a VERY basic WCF web service going and I can get my old SqlConnection calls to work and return a DataSet. But I can't/don't know how to get the Linq to SQL queries to work. I'm guessing it might be a permissions problem since I need to connect to the SQL Database with a specific set of credentials but I don't know how I can test if that is the issue. I've tried using both of these connection strings and neither seem to give me a different result. <add name="GeoDataConnectionString" connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;Integrated Security=True" providerName="System.Data.SqlClient" /> <add name="GeoDataConnectionString" connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;User ID=domain\userName; Password=blahblah; Trusted_Connection=true" providerName="System.Data.SqlClient" /> Here is the function in my service that does the query and I have the interface add the [OperationContract] public string GetCity(int cityId) { GeoDataContext db = new GeoDataContext(); var city = from c in db.Cities where c.CITY_ID == 30429 select c.DESCRIPTION; return city.ToString(); } The GeoData.dbml only has one simple table in it with a list of city id's and city names. I have also changed the "Serialization Mode" on the DataContext to "Unidirectional" which from what I've read needs to be done for WCF. When I run the service I get this as the return: SELECT [t0].[DESCRIPTION] FROM [dbo].[Cities] AS [t0] WHERE [t0].[CITY_ID] = @p0 Dang, so as I'm writing this I realize that maybe my query is all messed up?

    Read the article

  • How do I create a dynamic data transfer object dynamically from ADO.net model

    - by Richard
    I have a pretty simple database with 5 tables, PK's and relationships setup, etc. I also have an ASP.net MVC3 project I'm using to create simple web services to feed JSON/XML to a mobile app using post/get. To access my data I'm using an ADO.net entity model class to handle generation of the entities, etc. Due to issues with serialization/circular references created by the auto-generated relations from ADO.net entity model, I've been forced to create "Data transfer objects" to strip out the relations and data that doesn't need to be transferred. Question 1: is there an easier way to create DTOs using the entity framework itself? IE, specify only the entity properties I want to convert to Jsonresults? I don't wish to use any 3rd party frameworks if I can help it. Question 2: A side question for Entity Framework, say I create an ADO.net entity model in one project within a solution. Because that model relies on the connection to the database specified in project A, can project B somehow use that model with a similar connection? Both projects are in the same solution. Thanks!

    Read the article

  • Messages not forwarded to error queue when exception is thrown in handler (it works on my machine)

    - by darthjit
    We are using NServiceBus 4.0.5 with SQL Server (SQL Server 2012) as transport. When the handler throws an exception, NSB does not retry or move the message to the error queue. Successful messages make it to the audit queue, but the failed/errored ones don't! Interestingly, all this works on our local machines (Windows 7, SQL Server LocalDB) but not on Windows Server 2012 (SQL Server 2012). Here is the config info on the subscriber: <add name="NServiceBus/Transport" connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" /> <add name="NServiceBus/Persistence" connectionString="Data Source=xxx;Initial Catalog=NServiceBus;Integrated Security=SSPI;Enlist=false;" /> <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" /> <UnicastBusConfig ForwardReceivedMessagesTo="audit"> <MessageEndpointMappings> <add Assembly="Services.Section.Messages" Endpoint="Services.ACL.Worker" /> </MessageEndpointMappings> </UnicastBusConfig> And in code it is configured as follows: public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization { public void Init() { IContainer container = ContainerInstanceProvider.GetContainerInstance(); Configure.Transactions.Enable(); Configure.With() .AutofacBuilder(container) .UseTransport<SqlServer>() .Log4Net() //.Serialization.Json() .UseNHibernateSubscriptionPersister() .UseNHibernateTimeoutPersister() .MessageForwardingInCaseOfFault() .RijndaelEncryptionService() .DefiningCommandsAs(type => type.Namespace != null && type.Namespace.EndsWith("Commands")) .DefiningEventsAs(type => type.Namespace != null && type.Namespace.EndsWith("Events")) .UnicastBus(); } } Any ideas on how to fix this? Here is the log info (there is a lot there, search for "error" to see the relevant parts): https://gist.github.com/ranji/7378249

    Read the article

  • lambda traits inconsistency across C++0x compilers

    - by Sumant
    I observed some inconsistency between two compilers (g++ 4.5, VS2010 RC) in the way they match lambdas with partial specializations of class templates. I was trying to implement something like boost::function_types for lambdas to extract type traits. Check this for more details. In g++ 4.5, the type of the operator() of a lambda appears to be like that of a free standing function (R (*)(...)) whereas in VS2010 RC, it appears to be like that of a member function (R (C::*)(...)). So the question is are compiler writers free to interpret any way they want? If not, which compiler is correct? See the details below. template <typename T> struct function_traits : function_traits<decltype(&T::operator())> { // This generic template is instantiated on both the compilers as expected. }; template <typename R, typename C> struct function_traits<R (C::*)() const> { // inherits from this one on VS2010 RC typedef R result_type; }; template <typename R> struct function_traits<R (*)()> { // // inherits from this one g++ 4.5 typedef R result_type; }; int main(void) { auto lambda = []{}; function_traits<decltype(lambda)>::result_type *r; // void * } This program compiles on both g++ 4.5 and VS2010 but the function_traits that are instantiated are different as noted in the code.
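
    For what it's worth, the published standard describes the closure type's function call operator as a non-static member function, which matches the member-pointer form VS2010 reports here. A sketch of traits written against that model (using C++11 variadic templates for brevity, so it assumes a more recent compiler than the ones in the question):

    ```cpp
    #include <type_traits>

    template <typename T>
    struct function_traits : function_traits<decltype(&T::operator())> {};

    template <typename R, typename C, typename... Args>
    struct function_traits<R (C::*)(Args...) const> {   // ordinary lambdas
        typedef R result_type;
    };

    template <typename R, typename C, typename... Args>
    struct function_traits<R (C::*)(Args...)> {          // mutable lambdas
        typedef R result_type;
    };

    int main() {
        auto lambda = [] { return 42; };
        static_assert(std::is_same<function_traits<decltype(lambda)>::result_type,
                                   int>::value,
                      "result_type should be int");
    }
    ```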

    Read the article

  • Why Does This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )? Sample 1 (MI score of 71) public static String Sha1(String plainText) { using (SHA1Managed sha1 = new SHA1Managed()) { Byte[] text = Encoding.Unicode.GetBytes(plainText); Byte[] hashBytes = sha1.ComputeHash(text); return Convert.ToBase64String(hashBytes); } } Sample 2 (MI score of 73) public static String Sha1(String plainText) { Byte[] text, hashBytes; using (SHA1Managed sha1 = new SHA1Managed()) { text = Encoding.Unicode.GetBytes(plainText); hashBytes = sha1.ComputeHash(text); } return Convert.ToBase64String(hashBytes); } I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.

    Read the article

  • What MS technology to use for HTTP service returning XML?

    - by Borek
    I need to create a service that: accepts HTTP requests (with query string or HTTP POST parameters) does some processing on the requests (checking if the request is valid, authentication etc.) reads data from a custom store (another HTTP call in our case) returns the result as custom XML (defined with XSD) I'm trying to think of various MS technologies that could help me and how good they would be for this scenario (pretty standard one I guess). The tasks above are relatively separate, this is what comes to mind: HTTP front-end: ASP.NET Web Forms ASP.NET MVC (seems more appropriate here as I won't need server controls, view state etc.) WCF? Don't know much about it or how well it would suit my task. Custom logic on the server: this will probably be a generic C# code in all cases (sometimes "plugged into" or called from MVC controllers or some equivalent place in other technologies) Reading data from internal data stores: As said, this is another HTTP server in our case. Options that come to mind: Just read the data using something like WebClient (Just theoretically) implement a LINQ provider (Just even more theoretically) implement an EF provider Output the data as custom XML: Linq2XML Serialization? Is it flexible enough? Does WCF provide some tools for this? Some "OXM" - Object/XML mapper if there is something like that for .NET I may be wrong in many of my assumptions, this is just a quick list that comes to mind after a quick research. Some general notes / questions: Testing is important Solution with a clear domain model would be much preferred over the one without Can Entity Framework actually help somewhere in my scenario? If so, where and how? Would WCF be an appropriate technology for this? I don't know much about it.

    Read the article

  • In Java it seems Public constructors are always a bad coding practice

    - by Adam Gent
    This may be a controversial question and may not be suited for this forum (so I will not be insulted if you choose to close this question). It seems, given the current capabilities of Java, that there is no reason to make constructors public ... ever. Friendly, private, protected are OK, but public no. It seems that it's almost always a better idea to provide a public static method for creating objects. Every Java Bean serialization technology (JAXB, Jackson, Spring etc.) can call a protected or private no-arg constructor. My questions are: I have never seen this practice decreed or written down anywhere - have I missed it? Maybe Bloch mentions it but I don't own his book. Is there a use case, other than perhaps not being super DRY, that I missed? EDIT: I'll explain why static methods are better. 1. For one, you get better type inference. For example, see Guava's http://code.google.com/p/guava-libraries/wiki/CollectionUtilitiesExplained 2. As the designer of the class, you can later change what is returned by a static method. 3. Dealing with constructor inheritance is painful, especially if you have to pre-calculate something.

    Read the article

  • Remove then Query fails in JPA (deleted entity passed to persist)

    - by nag
    I have two entities, MobeeCustomer and CustomerRegion, and I want to remove the CustomerRegion object: first I set the join column in CustomerRegion to null, then remove the object via the entityManager, but I am getting an exception. MobeeCustomer: public class MobeeCustomer implements Serializable{ private Long id; private String custName; private String Address; private String phoneNo; private Set<CustomerRegion> customerRegion = new HashSet<CustomerRegion>(0); @OneToMany(cascade = { CascadeType.PERSIST, CascadeType.REMOVE }, fetch = FetchType.LAZY, mappedBy = "mobeeCustomer") public Set<CustomerRegion> getCustomerRegion() { return customerRegion; } public void setCustomerRegion(Set<CustomerRegion> customerRegion) { this.customerRegion = customerRegion; } } CustomerRegion: public class CustomerRegion implements Serializable{ private Long id; private String custName; private String description; private String createdBy; private Date createdOn; private String updatedBy; private Date updatedOn; private MobeeCustomer mobeeCustomer; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "MOBEE_CUSTOMER") public MobeeCustomer getMobeeCustomer() { return mobeeCustomer; } public void setMobeeCustomer(MobeeCustomer mobeeCustomer) { this.mobeeCustomer = mobeeCustomer; } } Sample code: for (CustomerRegion region : deletedRegionList) { region.setMobeeCustomer(null); getEntityManager().remove(region); } Stack trace (please suggest how to remove the CustomerRegion object; this is the exception I am getting): javax.persistence.EntityNotFoundException: deleted entity passed to persist: [com.manam.mobee.persist.entity.CustomerRegion#<null>] 15:46:34,614 ERROR [STDERR] at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:613) 15:46:34,614 ERROR [STDERR] at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:299) 15:46:34,614 ERROR [STDERR] at org.jboss.seam.persistence.EntityManagerProxy.flush(EntityManagerProxy.java:92) 15:46:34,614 ERROR [STDERR] at org.jboss.seam.framework.EntityHome.update(EntityHome.java:64)

    Read the article

  • Lucene document Boosting

    - by athreyar
    Hello, I am having a problem with Lucene boosting. I am trying to boost a particular document which matches the (firstname) field specified. I have posted the relevant part of the code: private static Document createDoc(String lucDescription,String primaryk,String specialString){ Document doc = new Document(); doc.add(new Field("lucDescription",lucDescription, Field.Store.NO, Field.Index.TOKENIZED)); doc.add(new Field("primarykey",primaryk,Field.Store.YES,Field.Index.NO)); doc.add(new Field("specialDescription",specialString, Field.Store.NO, Field.Index.UN_TOKENIZED)); doc.setBoost ((float)(0.00001)); if (specialString.equals("chris")) doc.setBoost ((float)(100000.1)); return doc; } Why is this not working? public static String dbSearch(String searchString){ List pkList = new ArrayList(); String conCat="("; try{ String querystr = searchString; Query query = new QueryParser("lucDescription", new StandardAnalyzer()).parse(querystr); IndexSearcher searchIndex = new IndexSearcher("/home/athreya/docsIndexFile"); // Index of the User table-- /home/araghu/aditya/indexFile. Hits hits = searchIndex.search(query); System.out.println("Found " + hits.length() + " hits."); for(int iterator=0;iterator ... Thank you in advance, Athreya

    Read the article

  • What's a good Java-based Master-Slave communication mechanism?

    - by plecong
    I'm creating a Java application that requires master-slave communication between JVMs, possibly residing on the same physical machine. There will be a "master" server running inside a JEE application server (i.e. JBoss) that will have "slave" clients connect to it and dynamically register itself for communication (that is the master will not know the IP addresses/ports of the slaves so cannot be configured in advance). The master server acts as a controller that will dole work out to the slaves and the slaves will periodically respond with notifications, so there would be bi-directional communication. I was originally thinking of RPC-based systems where each side would be a server, but it could get complicated, so I'd prefer a mechanism where there's an open socket and they talk back and forth. I'm looking for a communication mechanism that would be low-latency where the messages would be mostly primitive types, so no serious serialization is necessary. Here's what I've looked at: RMI JMS: Built-in to Java, the "slave" clients would connect to the existing ConnectionFactory in the application server. JAX-WS/RS: Both master and slave would be servers exposing an RPC interface for bi-directional communication. JGroups/Hazelcast: Use shared distributed data structures to facilitate communication. Memcached/MongoDB: Use these as "queues" to facilitate communication, though the clients would have to poll so there would be some latency. Thrift: This does seem to keep a persistent connection, but not sure how to integrate/embed a Thrift server into JBoss WebSocket/Raw Socket: This would work, but require a lot more custom code than I'd like. Is there any technology I'm missing? Edit: Also looked at: JMX: Have the client connect to JBoss' JMX server and receive JMX notifications for bidirectional comms.

    Read the article

  • Working with complex objects in Prevayler commands

    - by alexantd
    The demos included in the Prevayler distribution show how to pass in a couple strings (or something simple like that) into a command constructor in order to create or update an object. The problem is that I have an object called MyObject that has a lot of fields. If I had to pass all of them into the CreateMyObject command manually, it would be a pain. So an alternative I thought of is to pass my business object itself into the command, but to hang onto a clone of it (keeping in mind that I can't store the BO directly in the command). Of course after executing this command, I would need to make sure to dispose of the original copy that I passed in. public class CreateMyObject implements TransactionWithQuery { private MyObject object; public CreateMyObject(MyObject business_obj) { this.object = (MyObject) business_obj.clone(); } public Object executeAndQuery(...) throws Exception { ... } } The Prevayler wiki says: Transactions can't carry direct object references (pointers) to business objects. This has become known as the baptism problem because it's a common beginner pitfall. Direct object references don't work because once a transaction has been serialized to the journal and then deserialized for execution its object references no longer refer to the intended objects - - any objects they may have referred to at first will have been copied by the serialization process! Therefore, a transaction must carry some kind of string or numeric identifiers for any objects it wants to refer to, and it must look up the objects when it is executed. I think by cloning the passed-in object I will be getting around the "direct object pointer" problem, but I still don't know whether or not this is a good idea...

    Read the article

  • Why do bind1st and bind2nd require constant function objects?

    - by rlbond
    So, I was writing a C++ program which would allow me to take control of the entire world. I was all done writing the final translation unit, but I got an error: error C3848: expression having type 'const `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>' would lose some const-volatile qualifiers in order to call 'void `anonymous-namespace'::ElementAccumulator<T,BinaryFunction>::operator ()(const point::Point &,const int &)' with [ T=SideCounter, BinaryFunction=std::plus<int> ] c:\program files (x86)\microsoft visual studio 9.0\vc\include\functional(324) : while compiling class template member function 'void std::binder2nd<_Fn2>::operator ()(point::Point &) const' with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ] c:\users\****\documents\visual studio 2008\projects\TAKE_OVER_THE_WORLD\grid_divider.cpp(361) : see reference to class template instantiation 'std::binder2nd<_Fn2>' being compiled with [ _Fn2=`anonymous-namespace'::ElementAccumulator<SideCounter,std::plus<int>> ] I looked in the specifications of binder2nd and there it was: it took a const AdaptibleBinaryFunction. So, not a big deal, I thought. I just used boost::bind instead, right? Wrong! Now my take-over-the-world program takes too long to compile (bind is used inside a template which is instantiated quite a lot)! At this rate, my nemesis is going to take over the world first! I can't let that happen -- he uses Java! So can someone tell me why this design decision was made? It seems like an odd decision. I guess I'll have to make some of the elements of my class mutable for now...
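
    A sketch of the usual fix: give the functor a const operator() (with a mutable member for any state it genuinely must update), so that binder2nd's const call operator can invoke it. The names here are assumptions loosely modelled on the error message, not the original class:

    ```cpp
    #include <functional>

    struct ElementAccumulator : std::binary_function<int, int, void> {
        ElementAccumulator() : total_(0) {}

        void operator()(const int& element, const int& weight) const {
            total_ += element * weight;   // legal because total_ is mutable
        }

        mutable int total_;
    };

    int main() {
        ElementAccumulator acc;
        std::binder2nd<ElementAccumulator> bound = std::bind2nd(acc, 3);
        bound(7);   // invokes the stored copy's const operator() with (7, 3)
        return 0;
    }
    ```

    Note that bind2nd copies the functor, so any accumulated state lives in the copy inside the binder, not in acc (boost::bind behaves the same way unless the functor is wrapped in boost::ref).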

    Read the article

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries-- though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store: size_t* indices_; // a dynamic array T* values_; // a dynamic array int size_; int capacity_; Why don't they simply use vector<pair<size_t, T>> data_; My main question is what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations, and might have better locality of reference. The other solution might search more quickly since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that vector of pairs is the better solution, yet both containers chose the other solution. Why?
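
    A minimal sketch of the vector-of-pairs layout the question leans towards, with the index/value pairs kept sorted so lookup can use binary search:

    ```cpp
    #include <algorithm>
    #include <cstddef>
    #include <utility>
    #include <vector>

    template <typename T>
    class sparse_vector {
    public:
        typedef std::pair<std::size_t, T> entry;

        // Returns the stored value at index i, or a default-constructed T.
        T get(std::size_t i) const {
            typename std::vector<entry>::const_iterator it =
                std::lower_bound(data_.begin(), data_.end(), entry(i, T()), by_index);
            return (it != data_.end() && it->first == i) ? it->second : T();
        }

        // Stores value at index i (a full version would erase entries equal to T()).
        void set(std::size_t i, const T& value) {
            typename std::vector<entry>::iterator it =
                std::lower_bound(data_.begin(), data_.end(), entry(i, T()), by_index);
            if (it != data_.end() && it->first == i)
                it->second = value;
            else
                data_.insert(it, entry(i, value));
        }

    private:
        static bool by_index(const entry& a, const entry& b) { return a.first < b.first; }
        std::vector<entry> data_;
    };
    ```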

    Read the article

  • Truncate C++ string fields generated by ostringstream, iomanip:setw

    - by Ian Durkan
    In C++ I need string representations of integers with leading zeroes, where the representation has 8 digits and no more than 8 digits, truncating digits on the right side if necessary. I thought I could do this using just ostringstream and iomanip.setw(), like this: int num_1 = 3000; ostringstream out_target; out_target << setw(8) << setfill('0') << num_1; cout << "field: " << out_target.str() << " vs input: " << num_1 << endl; The output here is: field: 00003000 vs input: 3000 Very nice! However if I try a bigger number, setw lets the output grow beyond 8 characters: int num_2 = 2000000000; ostringstream out_target; out_target << setw(8) << setfill('0') << num_2; cout << "field: " << out_target.str() << " vs input: " << num_2 << endl; out_target.str(""); output: field: 2000000000 vs input: 2000000000 The desired output is "20000000". There's nothing stopping me from using a second operation to take only the first 8 characters, but is field truncation truly missing from iomanip? Would the Boost formatting do what I need in one step?
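
    A sketch of the two-step approach, since setw only sets a minimum width and iomanip has no manipulator that caps it: pad with setw/setfill as before, then keep only the first 8 characters of the result:

    ```cpp
    #include <iomanip>
    #include <iostream>
    #include <sstream>
    #include <string>

    std::string fixed_width_8(int value) {
        std::ostringstream out;
        out << std::setw(8) << std::setfill('0') << value;
        return out.str().substr(0, 8);   // drop anything beyond 8 characters
    }

    int main() {
        std::cout << fixed_width_8(3000)       << "\n";   // 00003000
        std::cout << fixed_width_8(2000000000) << "\n";   // 20000000
        return 0;
    }
    ```

    There is no standard manipulator that imposes a maximum field width on numeric output, so some second step along these lines is needed either way.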

    Read the article

  • Mmap and structure

    - by blid..pl
    I'm working on some code involving communication between processes, using semaphores. I made a structure like this: typedef struct container { sem_t resource, mutex; int counter; } container; and use it this way (in the main app, and the same in the subordinate processes): container *memory; shm_unlink("MYSHM"); //just in case fd = shm_open("MYSHM", O_RDWR|O_CREAT|O_EXCL, 0); if(fd == -1) { printf("Error"); exit(EXIT_FAILURE); } memory = mmap(NULL, sizeof(container), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0); ftruncate(fd, sizeof(container)); Everything is fine when I use one of the sem_ functions, but when I try to do something like memory->counter = 5; it doesn't work. Probably I got something wrong with the pointers, but I've tried almost everything and nothing seems to work. Maybe there's a better way to share variables, structures etc. between processes? Unfortunately I'm not allowed to use Boost or something similar; the code is for educational purposes and I intend to keep it as simple as possible.
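
    A sketch of the ordering that usually fixes this: size the shared memory object with ftruncate before calling mmap, give shm_open a real permission mode, and compare mmap's return value against MAP_FAILED rather than NULL (the name and error handling below are assumptions for illustration):

    ```cpp
    #include <cstdio>
    #include <cstdlib>
    #include <fcntl.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <unistd.h>

    typedef struct container {
        sem_t resource, mutex;
        int counter;
    } container;

    int main() {
        shm_unlink("/MYSHM");   // just in case
        int fd = shm_open("/MYSHM", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd == -1) { perror("shm_open"); return EXIT_FAILURE; }

        // Size the object *before* mapping it; mapping a zero-length object
        // and then writing through the pointer is what makes the assignment fail.
        if (ftruncate(fd, sizeof(container)) == -1) {
            perror("ftruncate");
            return EXIT_FAILURE;
        }

        container* memory = static_cast<container*>(
            mmap(NULL, sizeof(container), PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, 0));
        if (memory == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

        memory->counter = 5;   // now backed by real pages, so the write sticks
        return 0;
    }
    ```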

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32 the outputs). When using the "-server" option with Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it's 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • Parse string to create a list of element

    - by Nick
    I have a string like this: "\r color=\"red\" name=\"Jon\" \t\n depth=\"8.26\" " And I want to parse this string and create a std::list of this object: class data { std::string name; std::string value; }; Where for example: name = color value = red What is the fastest way? I can use boost. EDIT: This is what i've tried: vector<string> tokens; split(tokens, str, is_any_of(" \t\f\v\n\r")); if(tokens.size() > 1) { list<data> attr; for_each(tokens.begin(), tokens.end(), [&attr](const string& token) { if(token.empty() || !contains(token, "=")) return; vector<string> tokens; split(tokens, token, is_any_of("=")); erase_all(tokens[1], "\""); attr.push_back(data(tokens[0], tokens[1])); } ); } But it does not work if there are spaces inside " ": like color="red 1".
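
    A sketch of one way around that, using Boost.Regex instead of splitting on whitespace: match each attribute as name="value" so spaces inside the quotes survive (the data constructor here is assumed, matching how the question's own snippet uses it):

    ```cpp
    #include <iostream>
    #include <list>
    #include <string>
    #include <boost/regex.hpp>

    struct data {
        std::string name;
        std::string value;
        data(const std::string& n, const std::string& v) : name(n), value(v) {}
    };

    std::list<data> parse_attributes(const std::string& str) {
        std::list<data> attrs;
        // One attribute: a word, '=', then anything up to the closing quote.
        static const boost::regex attr("(\\w+)\\s*=\\s*\"([^\"]*)\"");
        for (boost::sregex_iterator it(str.begin(), str.end(), attr), end; it != end; ++it)
            attrs.push_back(data((*it)[1].str(), (*it)[2].str()));
        return attrs;
    }

    int main() {
        std::list<data> attrs =
            parse_attributes("\r color=\"red 1\" name=\"Jon\" \t\n depth=\"8.26\" ");
        for (std::list<data>::const_iterator it = attrs.begin(); it != attrs.end(); ++it)
            std::cout << it->name << " = " << it->value << "\n";   // handles "red 1"
        return 0;
    }
    ```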

    Read the article

  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using ASMX "Add Web Reference"). The service is WCF. The signature of the methods varies between the client and the service, and I get some unwanted parameters in the method. Note: I have used IsRequired = true for the DataMember. Service: [OperationContract] int GetInt(); Client: proxy.GetInt(out requiredResult, out resultBool); Could you please help me make the schema non-varying for both WCF clients and non-WCF clients? Do we have any best practices for that? using System.ServiceModel; using System.Runtime.Serialization; namespace SimpleLibraryService { [ServiceContract(Namespace = "http://Lijo.Samples")] public interface IElementaryService { [OperationContract] int GetInt(); [OperationContract] int SecondTestInt(); } public class NameDecorator : IElementaryService { [DataMember(IsRequired=true)] int resultIntVal = 1; int firstVal = 1; public int GetInt() { return firstVal; } public int SecondTestInt() { return resultIntVal; } } } Binding = "basicHttpBinding" using NonWCFClient.WebServiceTEST; namespace NonWCFClient { class Program { static void Main(string[] args) { NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator(); int requiredResult = 0; bool resultBool = false; proxy.GetInt(out requiredResult, out resultBool); Console.WriteLine("GetInt___"+requiredResult.ToString() +"__" + resultBool.ToString()); int secondResult = 0; bool secondBool = false; proxy.SecondTestInt(out secondResult, out secondBool); Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString()); Console.ReadLine(); } } } Please help. Thanks, Lijo

    Read the article

  • So many technologies to choose from. Where does the beginner start?

    - by Sahat
    WPF Silverlight Windows phone 7 w/ Silverlight iPhone OS w/ Objective-C Cocoa w/ Objective-C ASP.NET Android Facebook FBML HTML5 I will be graduating with B.S. in Computer Science soon and have to decide what do I want to learn from this list. I believe it's better to focus on one thing, master it and build up a portfolio to enhance my resume. Bachelor's Degree with no experience, no portfolio won't do me any good. It won't get me a job by itself. I need to have something that will greatly boost my resume. What would it be? iPhone development? ASP.NET web development? Facebook development? Or completely something else that I haven't listed? I understand it's natural for silverlight developers to say "Learn Silverlight", and iPhone developers say "Learn iPhone SDK and Objective-C". So please try to give a constructive, non-biased, non-objective opinion on which technology should I focus on. Please don't close the topic for "subjective/argumentative" reasons. I am just looking for some guidance.

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry-type parcels. I have made indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made a stored procedure that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost, from queries taking almost minutes without an index to less than seconds, and it gets faster when the zoom is closer and thus fewer objects are displayed. Yet the CPU utilization gets to 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set up properly. I haven't found any clear documentation on how to properly set them up; every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books and articles? Thank you.

    Read the article

  • Cross-Platform Camera API

    - by Karim
    Hi, I'm building a video-transforming filter that has to transform video frames in real time. One of the key requirements of the filter is high performance, to minimize the number of dropped frames during the transform. Another requirement that is of lower priority, but also nice to have, is to make it cross-platform (both PCs and mobile devices). The application is built in C++. Now my question is: is there any API that is more portable and has similar or better performance characteristics than DirectShow? DirectShow's portability is limited to Windows-based devices (PCs and Windows Mobile/CE platforms). Also, I've noticed that, for example, using HTC's custom camera API gives far better performance than what DirectShow offers. If you want to check this, try to build a filter in DirectShow that multiplies each color by 2 and renders that in real time from the camera on the screen; then do the same with HTC's API. There is almost a 4-5x performance boost with the vendor-specific API. So it would be very nice if the library used the device-specific implementation of the driver, as performance is critical when doing these transforms on a mobile device (which runs at about 500 MHz).

    Read the article
