Search Results

Search found 3077 results on 124 pages for 'boost serialization'.

Page 108/124

  • Memcache key generation strategy

    - by Maxim Veksler
    Given a function f1 which receives n String arguments, which of the two options below would be the better key generation strategy for memcache in the scenario described? Our memcache client does internal md5sum hashing on the keys it gets:

        public class MemcacheClient {
            public Object get(String key) {
                String md5 = Md5sum.md5(key);
                // Talk to memcached to get the serialization...
                return memcached(md5);
            }
        }

    First option:

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = s1 + s2 + s3 + s4;
            return get(key);
        }

    Second option:

        /**
         * Calculate hash from Strings
         *
         * @param strings vararg list of Strings
         * @return calculated md5sum hash
         */
        public static String stringHash(Object... strings) {
            if (strings == null)
                throw new NullPointerException("D'oh! Can't calculate hash for null");
            MD5 md5sum = new MD5();
            // if (prevHash != null)
            //     md5sum.Update(prevHash);
            for (int i = 0; i < strings.length; i++) {
                if (strings[i] != null) {
                    md5sum.Update("_" + strings[i] + "_"); // Convert to String...
                } else {
                    // If the object is null, keep minimum entropy by hashing its position
                    md5sum.Update("_" + i + "_");
                }
            }
            return md5sum.asHex();
        }

        public static String f1(String s1, String s2, String s3, String s4) {
            String key = stringHash(s1, s2, s3, s4);
            return get(key);
        }

    Note that the possible problem with the second option is that we end up doing a second md5sum (in the memcache client) on an already md5sum'ed digest. Thanks for reading, Maxim.
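
    The practical weakness of the first option is that plain concatenation maps different argument tuples to the same key: "ab" + "c" and "a" + "bc" both produce "abc". Below is a minimal sketch of a collision-resistant key builder that length-prefixes each part before joining; the question's code is Java, but the idea is language-neutral, so it is shown here in C++ with illustrative names. The joined key would then be hashed once by the memcache client, as above.

        #include <string>
        #include <vector>

        // Build a cache key in which ("ab","c") and ("a","bc") can no longer collide,
        // by recording each part's length before appending it. The result is what
        // gets handed to the client, which hashes it exactly once.
        std::string makeKey(const std::vector<std::string>& parts) {
            std::string key;
            for (const std::string& p : parts) {
                key += std::to_string(p.size());  // length prefix disambiguates boundaries
                key += ':';
                key += p;
                key += '_';
            }
            return key;
        }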

    Read the article

  • Is there Annotation in JavaScript? If not, how to switch between debug/production modes in a declarative way

    - by Michael Mao
    Hi all: This is but a curious question. I cannot find any useful links from Google, so it might be better to ask the gurus here. The point is: is there a way to put "annotations" in JavaScript source code so that all code snippets written for testing purposes can be 'filtered out' when the project is deployed from the test environment into the real one? I know that in Java, C# and some other languages you can place an annotation just above a declaration, such as:

        // it is good to remove the annoying warning messages
        @SuppressWarnings("unchecked")
        public class Tester extends TestingPackage {
            ...
        }

    Basically, I've got a lot of testing code that prints things to the FireBug console. I don't want to manually "comment them out", because the person maintaining the code might not be aware of all the testing functions, so he/she might miss one function and the whole thing can be brought to its knees. One other thing: we might use a minimizer to "shrink" the source code into "human unreadable" code and boost performance (just like jQuery.min), so picking the testing sections out of that mess won't be possible for human eyes in the future. Any suggestion is much appreciated.

    Read the article

  • Passenger error message I can't figure out

    - by Sam Kong
    Hi, I am testing Rails 3 on DreamHost, which just installed Rails 3. I created a simple controller and it failed. The browser shows a 500 error (Internal Server Error) and the log shows the following message:

        Could not find i18n-0.5.0 in any of the sources
        Try running `bundle install`.
        *** Exception EOFError in spawn manager (Unexpected end-of-file detected.) (process 17951):
            from /dh/passenger/lib/phusion_passenger/utils.rb:306:in `unmarshal_and_raise_errors'
            from /dh/passenger/lib/phusion_passenger/rack/application_spawner.rb:71:in `spawn_application'
            from /dh/passenger/lib/phusion_passenger/rack/application_spawner.rb:41:in `spawn_application'
            from /dh/passenger/lib/phusion_passenger/spawn_manager.rb:159:in `spawn_application'
            from /dh/passenger/lib/phusion_passenger/spawn_manager.rb:287:in `handle_spawn_application'
            from /dh/passenger/lib/phusion_passenger/abstract_server.rb:352:in `__send__'
            from /dh/passenger/lib/phusion_passenger/abstract_server.rb:352:in `main_loop'
            from /dh/passenger/lib/phusion_passenger/abstract_server.rb:196:in `start_synchronously'
            from /dh/passenger/bin/passenger-spawn-server:61
        [ pid=13245 file=ext/apache2/Hooks.cpp:727 time=2010-12-24 12:13:38.287 ]:
        Unexpected error in mod_passenger: Cannot spawn application '/home/cp_rails3/sites/rails3.codepremise.com':
        The spawn server has exited unexpectedly.
        Backtrace:
            in 'virtual boost::shared_ptr<Passenger::Application::Session> Passenger::ApplicationPoolServer::Client::get(const Passenger::PoolOptions&)' (ApplicationPoolServer.h:471)
            in 'int Hooks::handleRequest(request_rec*)' (Hooks.cpp:523)

    It runs fine in the console (app.get "url") and also with "rails server". What's wrong? Thanks. Sam

    Read the article

  • Seam unit test can't connect to JBoss4.0.5GA

    - by user240423
    Does anybody know how to connect to JBoss 4.0.5GA from a Seam 2.2.0GA unit test? I'm using Seam 2.2.0GA with the embedded JBoss to run the unit test; the module needs to call an old JBoss server (EJB2, because the vendor locked-in JCA has to be deployed on the old JBoss). It's a typical Seam test case, and I can't get it to connect to JBoss 4.0.5GA using the jbossall-client.jar from JBoss 4.0.5GA. So far I have found that the embedded JBoss can only work with a JBoss 5.x server. With JBoss 4.2.3GA, it reports:

        [testng] java.rmi.MarshalException: Failed to communicate. Problem during marshalling/unmarshalling; nested exception is:
        [testng]     java.net.SocketException: end of file
        [testng]     at org.jboss.remoting.transport.socket.SocketClientInvoker.handleException(SocketClientInvoker.java:122)
        [testng]     at org.jboss.remoting.transport.socket.MicroSocketClientInvoker.transport(MicroSocketClientInvoker.java:679)
        [testng]     at org.jboss.remoting.MicroRemoteClientInvoker.invoke(MicroRemoteClientInvoker.java:122)
        [testng]     at org.jboss.remoting.Client.invoke(Client.java:1634)
        [testng]     at org.jboss.remoting.Client.invoke(Client.java:548)

    This is the best result I could get (the JBoss 5.1.0GA jbossall-client.jar calling a JBoss 4.2.3GA server in the embedded JBoss environment). Here is the ant script:

        <testng classpathref="build.test.classpath" outputDir="${target.test-reports.dir}" haltOnfailure="true">
            <jvmarg line="-Dsun.lang.ClassLoader.allowArraySyntax=true" />
            <jvmarg line="-Dorg.jboss.j2ee.Serialization=true" />
            <!-- <jvmarg line="-Djboss.remoting.pre_2_0_compatible=true" /> -->
            <jvmarg line="-Djboss.jndi.url=jnp://localhost:1099" />
            <xmlfileset dir="src/test/java" includes="${ant.project.name}-testsuite.xml" />
        </testng>

    Can anybody help?

    Read the article

  • Why do I have to unserialize this serialized value twice?

    - by 55skidoo
    What am I doing wrong here? I'm serializing a value, storing it in a database (table bb_meta), retrieving it... OK so far... but then I have to unserialize it twice. Shouldn't I be able to just unserialize once? This seems to work, but I'm wondering what point about serialization I'm missing here.

        //check database to see if user has ever visited before.
        $querystring = $bbdb->prepare( "SELECT `meta_value` FROM `$bbdb->meta` WHERE `object_type` = %s AND `object_id` = %s AND `meta_key` = %s LIMIT 1", $bbtype, $bb_this_thread, $bbuser );
        $bb_last_visits = $bbdb->get_row($querystring, OBJECT);

        //if $bb_last_visits is empty, add time() as the metavalue using bb_update_meta
        if (empty($bb_last_visits)) {
            $first_visit = time();
            echo 'serialized first visit: ' . $bb_this_visit_time_serialized = serialize(array($bb_this_thread => $first_visit));
            bb_update_meta( $bb_this_thread, $bbuser, $bb_this_visit_time_serialized, $bbtype ); //add to database, bb_meta table
            echo '$bb_last_visits was empty. Setting first visit time as ' . $bb_this_visit_time_serialized . '<br>';
        } else {
            //else, test by unserializing the data for use.
            echo 'last visit time already set: ';
            echo $bb_last_visits->meta_value;
            echo '<br>';
            //fatal error - echo 'unserialized: ' . $bb_last_visits_unserialized = unserialize($bb_last_visits[0]->meta_value);
            echo '<br>';
            echo 'unserialize: ' . $unserialized_visits = unserialize($bb_last_visits->meta_value);
            echo '<br>';
            echo 'hmm, need to unserialize again??: ';
            echo $unserialized_unserialized_visits = unserialize($unserialized_visits);
            echo '<br>';
            echo 'hey look, it\'s an array value I can finally use now. phew: ' . $unserialized_unserialized_visits[$bb_this_thread];
        }

    Read the article

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I'm not sure if the code is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly

        open System
        open System.IO
        open System.IO.Pipes
        open System.Text
        open System.Collections.Generic
        open System.Runtime.Serialization

        [<DataContract>]
        type Quote =
            { [<field: DataMember(Name="securityIdentifier") >] RicCode:string
              [<field: DataMember(Name="madeOn") >] MadeOn:DateTime
              [<field: DataMember(Name="closePrice") >] Price:float }

        let m_cache = new Dictionary<string, Quote>()

        let ParseQuoteString (quoteString:string) =
            let data = Encoding.Unicode.GetBytes(quoteString)
            let stream = new MemoryStream()
            stream.Write(data, 0, data.Length);
            stream.Position <- 0L
            let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
            let results:Quote array = ser.ReadObject(stream) :?> Quote array
            results

        let RefreshCache quoteList =
            m_cache.Clear()
            quoteList |> Array.iter(fun result->m_cache.Add(result.RicCode, result))

        let EstablishConnection() =
            let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
            let mutable sr = null
            printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
            pipeServer.WaitForConnection()
            printfn "[F#] Client connected."
            try
                // Stream for the request.
                sr <- new StreamReader(pipeServer)
            with
            | _ as e -> printfn "[F#]ERROR: %s" e.Message
            sr

        while true do
            let sr = EstablishConnection()
            // Read request from the stream.
            printfn "[F#] Ready to Receive data"
            sr.ReadLine()
            |> ParseQuoteString
            |> RefreshCache
            printfn "[F#]Quot Size, %d" m_cache.Count
            let quot = m_cache.["MSFT.OQ"]
            printfn "[F#]RIC: %s" quot.RicCode
            printfn "[F#]MadeOn: %s" (String.Format("{0:T}",quot.MadeOn))
            printfn "[F#]Price: %f" quot.Price

    Read the article

  • Why This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )?

    Sample 1 (MI score of 71)

        public static String Sha1(String plainText)
        {
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                Byte[] text = Encoding.Unicode.GetBytes(plainText);
                Byte[] hashBytes = sha1.ComputeHash(text);
                return Convert.ToBase64String(hashBytes);
            }
        }

    Sample 2 (MI score of 73)

        public static String Sha1(String plainText)
        {
            Byte[] text, hashBytes;
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                text = Encoding.Unicode.GetBytes(plainText);
                hashBytes = sha1.ComputeHash(text);
            }
            return Convert.ToBase64String(hashBytes);
        }

    I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are quite a few existing answers to this question. However, I found none of them really bringing it to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back-edges (see the boost graph documentation on file dependencies). I now would like to have some suggestions on whether all cycles in a graph can be detected via DFS and checking for back-edges. My opinion is that it could indeed work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex is a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participating nodes exist (e.g. triangles, rectangles etc.), additional work has to be done to discriminate the actual "shape" of each cycle.
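
    A minimal sketch of the back-edge check being discussed, on a directed graph stored as an adjacency list (illustrative code, not from the question). Each back edge, an edge into a vertex that is still on the recursion stack, closes one cycle; enumerating every elementary cycle still needs more than this (e.g. Johnson's algorithm), which is exactly the caveat in the last sentence above.

        #include <iostream>
        #include <vector>

        enum Color { White, Grey, Black };  // unvisited / on recursion stack / finished

        void dfs(int u, const std::vector<std::vector<int>>& adj, std::vector<Color>& color) {
            color[u] = Grey;                               // u is now on the recursion stack
            for (int v : adj[u]) {
                if (color[v] == Grey)                      // back edge: a cycle through v ... u
                    std::cout << "back edge " << u << " -> " << v << "\n";
                else if (color[v] == White)
                    dfs(v, adj, color);
            }
            color[u] = Black;                              // u is finished
        }

        int main() {
            // 0 -> 1 -> 2 -> 0 forms a cycle; 2 -> 3 does not
            std::vector<std::vector<int>> adj = {{1}, {2}, {0, 3}, {}};
            std::vector<Color> color(adj.size(), White);
            for (int u = 0; u < static_cast<int>(adj.size()); ++u)
                if (color[u] == White) dfs(u, adj, color);
        }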

    Read the article

  • Deserialize complex JSON (VB.NET)

    - by Ssstefan
    I'm trying to deserialize JSON returned by a directions API similar to the Google Maps API. My JSON is as follows (I'm using VB.NET 2008):

        jsontext = {
            "version":0.3,
            "status":0,
            "route_summary": {
                "total_distance":300,
                "total_time":14,
                "start_point":"43",
                "end_point":"42"
            },
            "route_geometry":[[51.025421,18.647631],[51.026131,18.6471],[51.027802,18.645639]],
            "route_instructions": [["Head northwest on 43",88,0,4,"88 m","NW",334.8],["Continue on 42",212,1,10,"0.2 km","NW",331.1,"C",356.3]]
        }

    So far I came up with the following code:

        Dim js As New System.Web.Script.Serialization.JavaScriptSerializer
        Dim lstTextAreas As Output_CloudMade() = js.Deserialize(Of Output_CloudMade())(jsontext)

    I'm not sure how to define the complex class, i.e. Output_CloudMade. I'm trying something like:

        Public Class RouteSummary
            Private mTotalDist As Long
            Private mTotalTime As Long
            Private mStartPoint As String
            Private mEndPoint As String

            Public Property TotalDist() As Long
                Get
                    Return mTotalDist
                End Get
                Set(ByVal value As Long)
                    mTotalDist = value
                End Set
            End Property

            Public Property TotalTime() As Long
                Get
                    Return mTotalTime
                End Get
                Set(ByVal value As Long)
                    mTotalTime = value
                End Set
            End Property

            Public Property StartPoint() As String
                Get
                    Return mStartPoint
                End Get
                Set(ByVal value As String)
                    mStartPoint = value
                End Set
            End Property

            Public Property EndPoint() As String
                Get
                    Return mEndPoint
                End Get
                Set(ByVal value As String)
                    mEndPoint = value
                End Set
            End Property
        End Class

        Public Class Output_CloudMade
            Private mVersion As Double
            Private mStatus As Long
            Private mRSummary As RouteSummary
            'Private mRGeometry As RouteGeometry
            'Private mRInstructions As RouteInstructions

            Public Property Version() As Double
                Get
                    Return mVersion
                End Get
                Set(ByVal value As Double)
                    mVersion = value
                End Set
            End Property

            Public Property Status() As Long
                Get
                    Return mStatus
                End Get
                Set(ByVal value As Long)
                    mStatus = value
                End Set
            End Property

            Public Property Summary() As RouteSummary
                Get
                    Return mRSummary
                End Get
                Set(ByVal value As RouteSummary)
                    mRSummary = value
                End Set
            End Property

            'Public Property Geometry() As String
            '    Get
            '    End Get
            '    Set(ByVal value As String)
            '    End Set
            'End Property

            'Public Property Instructions() As String
            '    Get
            '    End Get
            '    Set(ByVal value As String)
            '    End Set
            'End Property
        End Class

    but it does not work. The problem is with complex properties, like route_summary: it is filled with "Nothing". Other properties, like "status" or "version", are filled properly. Any ideas how to define the class for the above JSON? Can you share some working code for deserializing JSON in VB.NET? Thanks,

    Read the article

  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello lads, I am trying to develop a complex textual search engine. I have thousands of textual pages from many books. I need to search for pages that satisfy specified complex logical criteria. These criteria can contain virtually any combination of the following:

    A: Full words.
    B: Word roots (similar to stems; i.e. all words with certain key letters).
    C: Word templates (in some languages, words are filled into certain templates to form various parts of speech, such as adjectives or past/present verbs).
    D: Logical connectives: AND/OR/XOR/NOT/IF/IFF and parentheses to state priorities.

    Now, would it be faster to have the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct indexes of word/root/template-page-location tuples? Hence, we can boost searching for individual words/roots/templates. However, it gets tricky as we introduce logical connectives into our query. I thought of doing the following steps in such cases:

    1: Separately search for each individual word/root/template in the specified query.
    2: On a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective.

    For example, if we are searching for "he AND (is OR was)":

    1: We search for "he", "is" and "was" separately and get a result list for each word.
    2: Merge the result lists of "is" and "was" using the merging function OR-MERGE.
    3: Merge the merged result list from the OR-MERGE function with the one for "he" using the merging function AND-MERGE.

    The result of step 3 is then returned as the result of the specified query. What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
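
    A minimal sketch of the merge step described above, assuming each term lookup returns a sorted list of page ids (a posting list); the names and types are illustrative, not from the question. AND and OR then become set intersection and union over sorted lists, so "he AND (is OR was)" is and_merge(he, or_merge(is, was)).

        #include <algorithm>
        #include <iterator>
        #include <vector>

        using Postings = std::vector<int>;   // sorted page ids for one word/root/template

        // OR-MERGE: pages matching either term.
        Postings or_merge(const Postings& a, const Postings& b) {
            Postings out;
            std::set_union(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
            return out;
        }

        // AND-MERGE: pages matching both terms.
        Postings and_merge(const Postings& a, const Postings& b) {
            Postings out;
            std::set_intersection(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
            return out;
        }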

    Read the article

  • Fluent NHibernate mapping List<Point> as value to single column

    - by Paja
    I have this class:

        public class MyEntity
        {
            public virtual int Id { get; set; }
            public virtual List<Point> Vectors { get; set; }
        }

    How can I map the Vectors in Fluent NHibernate to a single column (as value)? I was thinking of this:

        public class Vectors : ISerializable
        {
            public List<Point> Vectors { get; set; }

            /* Here goes ISerializable implementation */
        }

        public class MyEntity
        {
            public virtual int Id { get; set; }
            public virtual Vectors Vectors { get; set; }
        }

    Is it possible to map the Vectors like this, hoping that Fluent NHibernate will initialize the Vectors class as standard ISerializable? Or how else could I map List<Point> to a single column? I guess I will have to write the serialization/deserialization routines myself, which is not a problem, I just need to tell FNH to use those routines. I guess I should use IUserType or ICompositeUserType, but I have no idea how to implement them, and how to tell FNH to cooperate.

    Read the article

  • What is the best software design to use in this scenario

    - by domdefelice
    I need to generate HTML snippets using jQuery. The creation of those snippets depends on some data. The data is stored server-side, in the session (where PHP is used). At the moment I have achieved this by:

    - retrieving the data from the server via AJAX in the form of JSON, and
    - building the snippets via specific JavaScript functions that read those data.

    The problem is that the complexity of the data is growing, and hence the serialization into JSON is getting more difficult, since I can't do it automatically. I can't do it automatically because some of the information is sensitive, so I generate a "stripped" version to send to the client. I know it is difficult to understand without any code to read, but I am hoping this is a common scenario and would be glad for any tip, suggestion or even design pattern you can give me. Should I store both a complete and a stripped version of the data on the server and then use some library to automatically generate the JSON from the stripped data? But this also means I have to keep the two data sets synchronized. Or maybe I could move the logic server-side, this way avoiding sending the data. But this means sending JavaScript code (since I rely on jQuery). Maybe not a good idea. Feel free to ask me for more details if this is not clear. Thank you for any help

    Read the article

  • Looking for advice on importing large dataset in sqlite and Cocoa/Objective-C

    - by jluckyiv
    I have a fairly large hierarchical dataset I'm importing. The total size of the database after import is about 270MB in sqlite. My current method works, but I know I'm hogging memory as I do it. For instance, if I run with Zombies, my system freezes up (although it will execute just fine if I don't use that Instrument). I was hoping for some algorithm advice. I have three hierarchical tables comprising about 400,000 records. The highest level has about 30 records, the next has about 20,000, the last has the balance. Right now, I'm using nested for loops to import. I know I'm creating an unreasonably large object graph, but I'm also looking to serialize to JSON or XML because I want to break up the records into downloadable chunks for the end user to import a la carte. I have the code written to do the serialization, but I'm wondering if I can serialize the object graph if I only have pieces in memory. Here's pseudocode showing the basic process for the sqlite import. I left out the unnecessary detail.

        [database open];
        [database beginTransaction];
        NSArray *firstLevels = [[FirstLevel fetchFromURL:url] retain];
        for (FirstLevel *firstLevel in firstLevels)
        {
            [firstLevel save];
            int id1 = [firstLevel primaryKey];
            NSArray *secondLevels = [[SecondLevel fetchFromURL:url] retain];
            for (SecondLevel *secondLevel in secondLevels)
            {
                [secondLevel saveWithForeignKey:id1];
                int id2 = [secondLevel primaryKey];
                NSArray *thirdLevels = [[ThirdLevel fetchFromURL:url] retain];
                for (ThirdLevel *thirdLevel in thirdLevels)
                {
                    [thirdLevel saveWithForeignKey:id2];
                }
                [database commit];
                [database beginTransaction];
                [thirdLevels release];
            }
            [secondLevels release];
        }
        [database commit];
        [database release];
        [firstLevels release];

    Read the article

  • What is the general feeling about reflection extensions in std::type_info?

    - by Evan Teran
    I've noticed that reflection is one feature that developers coming from other languages find sorely lacking in C++. For certain applications I can really see why! It would be so much easier to write things like an IDE's auto-complete if you had reflection. And certainly serialization APIs would be a world easier if we had it. On the other side, one of the main tenets of C++ is "don't pay for what you don't use", which makes complete sense. That's something I love about C++. But it occurred to me there could be a compromise. Why don't compilers add extensions to the std::type_info structure? There would be no runtime overhead. The binary could end up being larger, but this could be a simple compiler switch to enable/disable, and to be honest, if you are really concerned about the space savings, you'll likely disable exceptions and RTTI anyway. Some people cite issues with templates, but the compiler happily generates std::type_info structures for template types already. I can imagine a g++ switch like -fenable-typeinfo-reflection which could become very popular (and mainstream libs like boost/Qt/etc could easily have a check to generate code which uses it if present, in which case the end user would benefit at no more cost than flipping a switch). I don't find this unreasonable, since large portable libraries like this already depend on compiler extensions. So why isn't this more common? I imagine that I'm missing something; what are the technical issues with this?
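
    For context, here is a rough sketch of the kind of per-type member table such a hypothetical -fenable-typeinfo-reflection switch might emit, and which today has to be written by hand or produced by an external code generator (as Qt's moc does). All names below are illustrative assumptions, not an existing extension or API.

        #include <cstddef>
        #include <map>
        #include <string>
        #include <typeinfo>
        #include <vector>

        struct MemberInfo {                 // one data member: name, type name, byte offset
            std::string name;
            std::string type;
            std::size_t offset;
        };

        struct ExtendedTypeInfo {           // what extended std::type_info data could carry
            std::string name;
            std::vector<MemberInfo> members;
        };

        std::map<std::string, ExtendedTypeInfo>& registry() {
            static std::map<std::string, ExtendedTypeInfo> r;
            return r;
        }

        struct Point { int x; int y; };

        // Manual registration: exactly the boilerplate the proposed switch would remove.
        static const bool pointRegistered = [] {
            registry()[typeid(Point).name()] = {
                "Point",
                {{"x", "int", offsetof(Point, x)}, {"y", "int", offsetof(Point, y)}}};
            return true;
        }();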

    Read the article

  • Passing an object as a parameter from Android to a web service using ksoap

    - by user3718626
    I have an object called User which implements KvmSerializable. I would like to pass this object to the web service:

        PropertyInfo pi = new PropertyInfo();
        pi.setName("obj");
        pi.setValue(user);
        pi.setType(user.getClass());
        request.addProperty(pi);

        SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
        envelope.setOutputSoapObject(request);
        envelope.addMapping(NAMESPACE, "User", User.class);

        HttpTransportSE androidHttpTransport = new HttpTransportSE(URL);

    I get the following error:

        SoapFault - faultcode: 'soapenv:Server' faultstring: 'Unknow type {http://users.com}User' faultactor: 'null' detail: org.kxml2.kdom.Node@53263024
            at org.ksoap2.serialization.SoapSerializationEnvelope.parseBody(SoapSerializationEnvelope.java:141)
            at org.ksoap2.SoapEnvelope.parse(SoapEnvelope.java:140)
            at org.ksoap2.transport.Transport.parseResponse(Transport.java:118)
            at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:272)
            at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:118)
            at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:113)
            at com.compete.WebServiceCallTask.getQuestion(WebServiceCallTask.java:114)
            at com.compete.WebServiceCallTask.doInBackground(WebServiceCallTask.java:53)
            at com.compete.WebServiceCallTask.doInBackground(WebServiceCallTask.java:1)
            at android.os.AsyncTask$2.call(AsyncTask.java:287)
            at java.util.concurrent.FutureTask.run(FutureTask.java:234)
            at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
            at java.lang.Thread.run(Thread.java:856)

    I would appreciate it if anyone could point me to some sample code or tell me what the issue is. Thanks.

    Read the article

  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic. In the hope of getting some insight into how a modern streams / serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams wasn't well-designed, it wouldn't have made it into the C++ standard library in the first place. After having read various parts of this book, I am starting to have doubts if IOStreams can compare to e.g. the STL from an overall architectural point-of-view. Read e.g. this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL. What surprises me in particular: It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this — does anyone know good resources?); Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there's probably even worse examples). This makes it so much harder to understand the overall design and how the single parts co-operate. Even the book I mentioned above doesn't help that much (IMHO). Thus my question: If you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)
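
    To make the criticism of the extension interface concrete, here is a minimal, self-contained sketch (not from the question) of what customizing a stream actually looks like: you subclass std::streambuf and override tersely named protected hooks such as overflow/underflow/sync, the kind of naming the question calls cryptic.

        #include <cctype>
        #include <iostream>
        #include <streambuf>

        // A streambuf that upper-cases every character before forwarding it.
        class UpcaseBuf : public std::streambuf {
            std::streambuf* dest_;
        public:
            explicit UpcaseBuf(std::streambuf* dest) : dest_(dest) {}
        protected:
            int_type overflow(int_type ch) override {      // called for each character written
                if (traits_type::eq_int_type(ch, traits_type::eof()))
                    return traits_type::not_eof(ch);
                return dest_->sputc(static_cast<char>(std::toupper(ch)));
            }
        };

        int main() {
            UpcaseBuf buf(std::cout.rdbuf());
            std::ostream out(&buf);
            out << "iostreams extension point\n";          // prints IOSTREAMS EXTENSION POINT
        }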

    Read the article

  • SharePoint extensions for VS - which version have I got?

    - by Javaman59
    I'm using Visual Studio 2008, and have downloaded VSeWSS.exe 1.2 to enable Web Part development. I am new to SharePoint development, and am already bewildered by the number of different versions of SharePoint and the VS add-ons. This particular problem has come up, which highlights my confusion. I selected Add - New Project - Visual C# - SharePoint - Web Part, accepted the defaults, and VS created a project with the main file WebPart1.cs:

        using System;
        using System.Runtime.InteropServices;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Xml.Serialization;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.WebControls;
        using Microsoft.SharePoint.WebPartPages;

        namespace WebPart1
        {
            [Guid("9bd7e74c-280b-44d4-baa1-9271679657a0")]
            public class WebPart1 : System.Web.UI.WebControls.WebParts.WebPart
            {
                public WebPart1()
                {
                }

                protected override void CreateChildControls() // <-- This line
                {
                    base.CreateChildControls();

                    // TODO: add custom rendering code here.
                    // Label label = new Label();
                    // label.Text = "Hello World";
                    // this.Controls.Add(label);
                }
            }
        }

    The book I'm following, Essential SharePoint 2007 by Jeff Webb, has the following for the default project:

        using System;
        <...as previously>

        namespace WebPart1
        {
            [Guid("9bd7e74c-280b-44d4-baa1-9271679657a0")]
            public class WebPart1 : System.Web.UI.WebControls.WebParts.WebPart
            {
                // ^ this is a new style (ASP.NET) web part! [author's comment]

                protected override void Render(HtmlTextWriter writer) // <-- This line
                {
                    // This method draws the web part
                    // TODO: add custom rendering code here.
                    // Label label = new Label();
                    // writer.Write("Output HTML");
                }
            }
        }

    The real reason for my concern is that in this chapter of the book the author frequently mentions the distinction between "old style" web parts and "new style" web parts, as noted in his comment on the Render method. What's happened? Why has my default Web Part got a different signature to the author's?

    Read the article

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list. AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time. ... just a little more info to let you know where I'm coming from. As soon as I had just begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks All! John
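
    Since the question keeps circling back to serializing the whole list into one column, here is a minimal sketch of that option (illustrative only, and shown in C++ rather than the asker's LINQ to SQL setting): the ordered, unique ids become one delimited TEXT value, and because the list is always read back whole, the stored order is preserved for free.

        #include <sstream>
        #include <string>
        #include <vector>

        // Turn an ordered list of ids into a single column value, e.g. "7,3,12".
        std::string toColumn(const std::vector<int>& items) {
            std::ostringstream out;
            for (std::size_t i = 0; i < items.size(); ++i) {
                if (i != 0) out << ',';
                out << items[i];
            }
            return out.str();
        }

        // Parse the column value back into the list, preserving the stored order.
        std::vector<int> fromColumn(const std::string& column) {
            std::vector<int> items;
            std::istringstream in(column);
            std::string token;
            while (std::getline(in, token, ','))
                items.push_back(std::stoi(token));
            return items;
        }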

    Read the article

  • segfault during __cxa_allocate_exception in SWIG wrapped library

    - by lefticus
    While developing a SWIG-wrapped C++ library for Ruby, we came across an unexplained crash during exception handling inside the C++ code. I'm not sure of the specific circumstances to recreate the issue, but it happened first during a call to std::uncaught_exception, then after some code changes, moved to __cxa_allocate_exception during exception construction. Neither GDB nor valgrind provided any insight into the cause of the crash. I've found several references to similar problems, including:

    - http://wiki.fifengine.de/Segfault_in_cxa_allocate_exception
    - http://forums.fifengine.de/index.php?topic=30.0
    - http://code.google.com/p/osgswig/issues/detail?id=17
    - https://bugs.launchpad.net/ubuntu/+source/libavg/+bug/241808

    The overriding theme seems to be a combination of circumstances:

    - A C application is linked to more than one C++ library
    - More than one version of libstdc++ was used during compilation
    - Generally the second version of C++ used comes from a binary-only implementation of libGL
    - The problem does not occur when linking your library with a C++ application, only with a C application

    The "solution" is to explicitly link your library with libstdc++ and possibly also with libGL, forcing the order of linking. After trying many combinations with my code, the only solution that I found that works is the LD_PRELOAD="libGL.so libstdc++.so.6" ruby scriptname option. That is, none of the compile-time linking solutions made any difference. My understanding of the issue is that the C++ runtime is not being properly initialized. By forcing the order of linking you bootstrap the initialization process and it works. The problem occurs only with C applications calling C++ libraries because the C application is not itself linking to libstdc++ and is not initializing the C++ runtime. Because using SWIG (or boost::python) is a common way of calling a C++ library from a C application, that is why SWIG often comes up when researching the problem. Is anyone out there able to give more insight into this problem? Is there an actual solution or do only workarounds exist? Thanks.

    Read the article

  • Passing Object to Service in WCF

    - by hgulyan
    Hi, I have my custom class Customer with its properties. I added the DataContract attribute to the class and DataMember to the properties, and it was working fine. But when I call a service class's function, passing a Customer instance as a parameter, some of my properties arrive with 0 values. While debugging I can see the property values, but after the call gets to the function, some properties' values are 0. Why is that? There's no code between these two actions. The DataContract markup works fine, everything's OK. Any suggestions on this issue? I tried to change ByRef to ByVal, but it doesn't change anything. Why would it pass other values correctly and some of the integer types just as 0? Maybe the answer is simple, but I can't figure it out. Thank you.

        <DataContract()>
        Public Class Customer
            Private Type_of_clientField As Integer = -1

            <DataMember(Order:=1)>
            Public Property type_of_client() As Integer
                Get
                    Return Type_of_clientField
                End Get
                Set(ByVal value As Integer)
                    Type_of_clientField = value
                End Set
            End Property
        End Class

        <ServiceContract(SessionMode:=SessionMode.Allowed)>
        <DataContractFormat()>
        Public Interface CustomerService
            <OperationContract()>
            Function addCustomer(ByRef customer As Customer) As Long
        End Interface

    The type_of_client property's value is 6 before I call the addCustomer function. After it enters that function, the value is 0.

    UPDATE: The issue is in instance creation. When I create an instance of a class on the client side that is stored on the service side, some of my properties pass 0 or Nothing, but when I call a function of a service class that returns a new instance of that class, it works fine. What's the difference? Could that be a serialization issue?

    Read the article

  • Serialize Object

    - by Mark Pearl
    Hi.. I am new to serialization and I cannot for the life of me figure out how to fix this exception I am getting... I have an object that has the following structure:

        [XmlRoot("MaxCut2")]
        public class MaxCut2File : IFile
        {
            public MaxCut2File()
            {
                MyJob = new Job();
                Job.Reference = "MyRef";
            }

            [XmlElement("JobDetails", typeof(Job))]
            public IJob MyJob { get; set; }
        }

    An interface...

        public interface IJob
        {
            string Reference { get; set; }
        }

    And a class...

        [Serializable()]
        public class Job : IJob
        {
            [XmlElement("Reference")]
            public string Reference { get; set; }
        }

    When I try to serialize an instance of the MaxCut2File object I get an error:

        {"Cannot serialize member 'MaxCut2File.MaxCut2File.MyJob' of type 'MaxCut2BL.Interfaces.IJob', see inner exception for more details."}
        "There was an error reflecting type 'MaxCut2File.MaxCut2File'."

    However, if I change my property MyJob from the IJob type to the Job type it works fine... Any ideas?

    Read the article

  • Passing an ActionScript JPG Byte Array to JavaScript (and eventually to PHP)

    - by Gus
    Our web application has a feature which uses Flash (AS3) to take photos using the user's web cam, then passes the resulting byte array to PHP, where it is reconstructed and saved on the server. However, we need to be able to take this web application offline, and we have chosen Gears to do so. The user takes the app offline, performs his tasks, then when he's reconnected to the server, we "sync" the data back to our central database. We don't have PHP to interact with Flash anymore, but we still need to allow users to take and save photos. We don't know how to save a JPG that Flash creates in a local database. Our hope was that we could save the byte array, a serialized string, or somehow actually persist the object itself, then pass it back to either PHP or Flash (and then PHP) to recreate the JPG. We have tried:

    - passing the byte array to JavaScript instead of PHP, but JavaScript doesn't seem to be able to do anything with it (the object seems to be stripped of its methods)
    - stringifying the byte array in Flash and then passing it to JavaScript, but we always get the same string: ÿØÿà

    Now we are thinking of serializing the string in Flash, passing it to JavaScript, then on the return route passing that string back to Flash, which will then pass it to PHP to be reconstructed as a JPG (whew). Since no one on our team has an extensive Flash background, we're a bit lost. Is serialization the way to go? Is there a more realistic way to do this? Does anyone have any experience with this sort of thing? Perhaps we can build a JavaScript class that is the same as the byte array class in AS?

    Read the article

  • C++ Returning Multiple Items

    - by Travis Parks
    I am designing a class in C++ that extracts URLs from an HTML page. I am using Boost's Regex library to do the heavy lifting for me. I started designing a class and realized that I didn't want to tie down how the URLs are stored. One option would be to accept a std::vector<Url> by reference and just call push_back on it. I'd like to avoid forcing consumers of my class to use std::vector. So, I created a member template that took a destination iterator. It looks like this:

        template <typename TForwardIterator, typename TOutputIterator>
        TOutputIterator UrlExtractor::get_urls(
            TForwardIterator begin,
            TForwardIterator end,
            TOutputIterator dest);

    I feel like I am overcomplicating things. I like to write fairly generic code in C++, and I struggle to lock down my interfaces. But then I get into these predicaments where I am trying to templatize everything. At this point, someone reading the code doesn't realize that TForwardIterator is iterating over a std::string. In my particular situation, I am wondering if being this generic is a good thing. At what point do you start making code more explicit? Is there a standard approach to getting values out of a function generically?
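
    As a concrete illustration of the output-iterator design above (a sketch with assumed names, using std::regex instead of Boost.Regex to stay self-contained), the caller decides where extracted URLs go, and std::back_inserter keeps the common vector case to one line:

        #include <iterator>
        #include <regex>
        #include <string>
        #include <vector>

        // Copy every URL found in [begin, end) to dest and return the advanced iterator.
        // (std::regex needs bidirectional iterators, so the "forward" name is generous here.)
        template <typename TForwardIterator, typename TOutputIterator>
        TOutputIterator get_urls(TForwardIterator begin, TForwardIterator end, TOutputIterator dest) {
            static const std::regex url(R"(https?://[^\s"'<>]+)");
            for (std::regex_iterator<TForwardIterator> it(begin, end, url), last; it != last; ++it)
                *dest++ = it->str();
            return dest;
        }

        int main() {
            std::string html = R"(<a href="http://example.com">x</a> and http://example.org/page)";
            std::vector<std::string> urls;
            get_urls(html.begin(), html.end(), std::back_inserter(urls));  // urls now holds both links
        }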

    Read the article

  • Is there stl and utf8 friendly C++ Wrapper for ICU, or other powerful unicode library

    - by artyom
    Hello, I need a good Unicode library for C++. I need transformations done in a Unicode-aware way. For example:

    - sort all strings case-insensitively and get their first characters for an index
    - convert various Unicode strings to upper and to lower case
    - split text at reasonable positions (word boundaries), in a way that works for Chinese and Japanese as well
    - format numbers and dates in a locale-sensitive way (should be thread safe)
    - transparent support of UTF-8 (the primary internal representation)

    As far as I know the best library is ICU. However, I can't find decent developer-friendly API documentation with examples. Also, as far as I can see, it is not too friendly with modern C++ design, working with the STL and so on. I mean something like this:

        std::string msg;
        unistring umsg;
        umsg.from_utf8(msg);
        unistring::word_iterator wi;
        for(wi = umsg.words().begin(), n = 0; wi != umsg.words().end() && n < 10; ++wi, ++n)
            ;
        msg = umsg.substr(umsg.words().begin(), wi).to_utf8();
        cout << _("First 10 words are ") << msg;

    Does anybody know of a good STL-friendly ICU wrapper released under an open source license, preferably a permissive one like MIT or Boost, but others that are LGPLv2-compatible are OK as well? Is there another high quality library similar to ICU? Platform: UNIX/POSIX; Windows support is not required. Thanks, Artyom

    Edit: Unfortunately I wasn't logged in, so I can't mark the answer as accepted... I attached the answer myself.
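
    For readers hitting the same wall, here is a minimal sketch of driving ICU directly from UTF-8 std::string, roughly the round trip a friendlier wrapper would hide (the pkg-config package names are an assumption; adjust for your platform):

        #include <iostream>
        #include <string>
        #include <unicode/locid.h>
        #include <unicode/unistr.h>

        // Build with something like:
        //   g++ demo.cpp $(pkg-config --cflags --libs icu-uc icu-i18n)
        int main() {
            std::string msg = "istanbul";                              // UTF-8 in
            icu::UnicodeString umsg = icu::UnicodeString::fromUTF8(msg);
            umsg.toUpper(icu::Locale("tr_TR"));                        // locale-aware: Turkish i maps to dotted capital I
            std::string out;
            umsg.toUTF8String(out);                                    // UTF-8 back out
            std::cout << out << "\n";
        }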

    Read the article
