Search Results

Search found 16914 results on 677 pages for 'single threaded'.


  • Hibernate noob fetch join problem

    - by Bruce
    Hi all. I have two classes, Test2 and Test3. Test2 has an attribute test3 that is an instance of Test3. In other words, I have a unidirectional OneToOne association, with test2 having a reference to test3. When I select Test2 from the db, I can see that a separate select is being made to get the details of the associated test3 class. This is the famous 1+N selects problem. To fix this to use a single select, I am trying to use the fetch=join annotation, which I understand to be @Fetch(FetchMode.JOIN). However, with fetch set to join, I still see separate selects. Here are the relevant portions of my setup.

    hibernate.cfg.xml:

        <property name="max_fetch_depth">2</property>

    Test2:

        public class Test2 {
            @OneToOne(cascade=CascadeType.ALL, fetch=FetchType.EAGER)
            @JoinColumn(name="test3_id")
            @Fetch(FetchMode.JOIN)
            public Test3 getTest3() {
                return test3;
            }

    NB I set the FetchType to EAGER out of desperation, even though it defaults to EAGER anyway for OneToOne mappings, but it made no difference. Thanks for any help!

    Edit: I've pretty much given up on trying to use FetchMode.JOIN - can anyone confirm that they have got it to work, i.e. produce a left outer join? In the docs I see that "Usually, the mapping document is not used to customize fetching. Instead, we keep the default behavior, and override it for a particular transaction, using left join fetch in HQL". If I do a left join fetch instead:

        query = session.createQuery("from Test2 t2 left join fetch t2.test3");

    then I do indeed get the results I want - i.e. a left outer join in the query.
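
    For comparison, here is a rough sketch of the per-query Criteria equivalent of that HQL. It assumes an open Hibernate Session in a variable called session, and reuses the entity and property names from the code above; like left join fetch, it overrides fetching for one query rather than in the mapping:

        // Minimal sketch (assumes an open Session named "session"): the Criteria
        // API can force an outer-join fetch for a single query. Note this is
        // org.hibernate.FetchMode, a different type from the @Fetch annotation's.
        Criteria criteria = session.createCriteria(Test2.class)
                .setFetchMode("test3", org.hibernate.FetchMode.JOIN);
        List<?> results = criteria.list();   // one select with a left outer join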

    Read the article

  • Discussion on SEO best-practices for site development involving php...

    - by Bradley Herman
    Recently in our work, I've started getting some experience with SEO (finally). It's something I've put off for a long time because I've always maintained that SEO is a buzz-word b.s. pseudo-science and more about providing quality, relevant content (assuming proper header tags and the basics are covered). However, sometimes a client doesn't have stellar content yet still demands SEO and high rankings. While it's not how I design sites 100% of the time (as design dictates structure), I typically create a basic template from the design my boss gives me, then I optimize it, and then strip the top and bottom and move those to header.php and footer.php, using the following to bring in the header and footer based on AJAX versus HTML requests:

        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/header.php'); }?>
        #content here
        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/footer.php'); }?>

    Then I use jQuery to intercept page requests and use AJAX to fill in, for example, a #copy div with the new content. This avoids unnecessarily loading all the header and footer info every time, but still allows users without JavaScript to access pages without any problems. (Also worth thinking about: depending on the size of the content, do the extra HTTP requests added by this method make it more of a server strain than a single, larger file?) I don't have a really solid understanding of the meta keywords and their SEO significance, but as I recall reading, the keywords, title, and description on a page should match up to the page's content - i.e. each page should have slightly different keywords/descriptions while retaining some common ground. What I'm getting at here is trying to foster a discussion on whether my approach is flawed to begin with, if there are things I can do (within reason) that keep the site structure simple but allow for better SEO practices, or if my SEO understandings are wrong. This isn't a question, per se, but hopefully a constructive discussion here that more than just I can learn from. I appreciate any responses and hope to hear from you. Thanks!

    Read the article

  • Does Android AsyncTaskQueue or similar exist?

    - by Ben L.
    I read somewhere (and have observed) that starting threads is slow. I always assumed that AsyncTask created and reused a single thread because it required being started inside the UI thread. The following (anonymized) code is called from a ListAdapter's getView method to load images asynchronously. It works well until the user moves the list quickly, and then it becomes "janky".

        final File imageFile = new File(getCacheDir().getPath() + "/img/" + p.image);
        image.setVisibility(View.GONE);
        view.findViewById(R.id.imageLoading).setVisibility(View.VISIBLE);
        (new AsyncTask<Void, Void, Bitmap>() {
            @Override
            protected Bitmap doInBackground(Void... params) {
                try {
                    Bitmap image;
                    if (!imageFile.exists() || imageFile.length() == 0) {
                        image = BitmapFactory.decodeStream(new URL(
                                "http://example.com/images/" + p.image).openStream());
                        image.compress(Bitmap.CompressFormat.JPEG, 85,
                                new FileOutputStream(imageFile));
                        image.recycle();
                    }
                    image = BitmapFactory.decodeFile(imageFile.getPath(), bitmapOptions);
                    return image;
                } catch (MalformedURLException ex) {
                    // TODO Auto-generated catch block
                    ex.printStackTrace();
                    return null;
                } catch (IOException ex) {
                    // TODO Auto-generated catch block
                    ex.printStackTrace();
                    return null;
                }
            }

            @Override
            protected void onPostExecute(Bitmap image) {
                if (view.getTag() != p) // The view was recycled.
                    return;
                view.findViewById(R.id.imageLoading).setVisibility(View.GONE);
                view.findViewById(R.id.image).setVisibility(View.VISIBLE);
                ((ImageView) view.findViewById(R.id.image)).setImageBitmap(image);
            }
        }).execute();

    I'm thinking that a queue-based method would work better, but I'm wondering if there is one or if I should attempt to create my own implementation.
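
    For what it's worth, a rough sketch of one queue-based approach using a plain single-threaded executor rather than any built-in Android queue; loadBitmap and applyToRow are placeholder names standing in for the doInBackground and onPostExecute work shown above:

        // Sketch: one shared worker thread acts as the queue (FIFO order), and
        // results are posted back to the UI thread with a Handler.
        // Needs: java.util.concurrent.ExecutorService, Executors,
        //        android.os.Handler, android.os.Looper.
        private final ExecutorService imageQueue = Executors.newSingleThreadExecutor();
        private final Handler uiHandler = new Handler(Looper.getMainLooper());

        void queueImageLoad(final File imageFile, final View row, final Object tag) {
            imageQueue.execute(new Runnable() {
                public void run() {
                    final Bitmap bmp = loadBitmap(imageFile);  // placeholder: download + decode
                    uiHandler.post(new Runnable() {
                        public void run() {
                            if (row.getTag() == tag) {         // skip recycled rows
                                applyToRow(row, bmp);          // placeholder: show image, hide spinner
                            }
                        }
                    });
                }
            });
        }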

    Read the article

  • Duplicate elements when adding XElement to XDocument

    - by Andy
    I'm writing a program in C# that will go through a bunch of config.xml files and update certain elements, or add them if they don't exist. I have the portion down that updates an element if it exists, with this code:

        XDocument xdoc = XDocument.Parse(ReadFile(_file));
        XElement element = xdoc.Elements("project").Elements("logRotator")
                               .Elements("daysToKeep").Single();
        element.Value = _DoRevert;

    But I'm running into issues when I want to add an element that doesn't exist. Most of the time part of the tree is in place, and when I use my code it adds another identical tree, and that causes the program reading the XML to blow up. Here is how I am attempting to do it:

        xdoc.Element("project").Add(new XElement("logRotator",
            new XElement("daysToKeep", _day)));

    and that results in a structure like this (the numToKeep tag was already there):

        <project>
          <logRotator>
            <daysToKeep>10</daysToKeep>
          </logRotator>
          <logRotator>
            <numToKeep>13</numToKeep>
          </logRotator>
        </project>

    but this is what I want:

        <project>
          <logRotator>
            <daysToKeep>10</daysToKeep>
            <numToKeep>13</numToKeep>
          </logRotator>
        </project>

    Read the article

  • How To Go About Updating Old C Code

    - by Ben313
    Hello. I have been working on some 10-year-old C code at my job this week, and after implementing a few changes, I went to the boss and asked if he needed anything else done. That's when he dropped the bomb. My next task was to go through the 7000 or so lines, understand more of the code, AND modularize the code somewhat. I asked him how he would like the source code modularized, and he said to start putting the old C code into C++ classes. Being a good worker, I nodded my head yes, and went back to my desk, where I sit now, wondering how in the world to take this code and "modularize" it. It's already in 20 source files, each with its own purpose and function. In addition, there are three "main" structs. Each of these structs has 30-plus fields, many of them being other, smaller structs. It's a complete mess to try to understand, but almost every single function in the program is passed a pointer to one of the structs, and uses the struct heavily. Is there any clean way for me to shoehorn this into classes? I am resolved to do it if it can be done, I just have no idea how to begin.

    Read the article

  • How can a link within a WebView load another layout using javascript?

    - by huffmaster
    So I have 2 layout files (main.xml, featured.xml), and each has a single WebView. When the application starts, "main.xml" loads an html file into its WebView. In this html file I have a link that calls javascript that runs code in the Activity that loaded the html. Once back in this Activity code, though, I try running setContentView(R.layout.featured) but it just bombs out on me. If I debug, it just dies without any real error, and if I run it the application just force closes. Am I going about this correctly or should I be doing something differently?

        final private int MAIN = 1;
        final private int FEATURED = 2;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            webview = (WebView) findViewById(R.id.wvMain);
            webview.getSettings().setJavaScriptEnabled(true);
            webview.getSettings().setSupportZoom(false);
            webview.addJavascriptInterface(new EHJavaScriptInterface(), "eh");
            webview.loadUrl("file:///android_asset/default.html");
        }

        final class EHJavaScriptInterface {
            EHJavaScriptInterface() {
            }

            public void loadLayout(final String lo) {
                int i = Integer.parseInt(lo.trim());
                switch (i) {
                    /****** THIS IS WHERE I'M BOMBING OUT *********/
                    case FEATURED: setContentView(R.layout.featured); break;
                    case MAIN: setContentView(R.layout.main); break;
                }
            }
        }
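
    One thing worth checking, as a guess: addJavascriptInterface callbacks arrive on a WebView thread rather than the UI thread, and calling setContentView off the UI thread can kill the app without a useful message. A minimal sketch of posting the call back to the UI thread, assuming EHJavaScriptInterface stays an inner class of the Activity as above:

        public void loadLayout(final String lo) {
            final int i = Integer.parseInt(lo.trim());
            runOnUiThread(new Runnable() {     // hop back onto the UI thread
                public void run() {
                    switch (i) {
                        case FEATURED: setContentView(R.layout.featured); break;
                        case MAIN:     setContentView(R.layout.main);     break;
                    }
                }
            });
        }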

    Read the article

  • Finding Palindromes in an Array

    - by Jack L.
    For this assignment, I think that I got it right, but when I submit it online, it doesn't list it as correct even though I checked with Eclipse.

    The prompt: Write a method isPalindrome that accepts an array of Strings as its argument and returns true if that array is a palindrome (if it reads the same forwards as backwards) and false if not. For example, the array {"alpha", "beta", "gamma", "delta", "gamma", "beta", "alpha"} is a palindrome, so passing that array to your method would return true. Arrays with zero or one element are considered to be palindromes.

    My code:

        public static void main(String[] args) {
            String[] input = new String[6];
            // {"aay", "bee", "cee", "cee", "bee", "aay"} should return true
            input[0] = "aay";
            input[1] = "bee";
            input[2] = "cee";
            input[3] = "cee";
            input[4] = "bee";
            input[5] = "aay";
            System.out.println(isPalindrome(input));
        }

        public static boolean isPalindrome(String[] input) {
            for (int i = 0; i < input.length; i++) { // Checks each element
                if (input[i] != input[input.length - 1 - i]) {
                    return false; // If a single instance of non-symmetry
                }
            }
            return true; // If symmetrical, only one element, or zero elements
        }

    As an example, {"aay", "bee", "cee", "cee", "bee", "aay"} returns true in Eclipse, but Practice-It! says it returns false. What is going on?
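
    A likely culprit, for reference: == compares String references rather than contents, so it happens to succeed for the interned literals in main but not for strings the grader builds at runtime. A small sketch of the same check using equals:

        public static boolean isPalindrome(String[] input) {
            for (int i = 0; i < input.length / 2; i++) {
                // equals() compares the characters; == only compares references
                if (!input[i].equals(input[input.length - 1 - i])) {
                    return false;
                }
            }
            return true;
        }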

    Read the article

  • Adapter for circle page indicator in android

    - by Charles LAU
    I am currently working on an android application which has multiple pages. I am trying to use the circle page indicator to allow users to view multiple pages by flipping over the screen. Each page has a separate XML file for the view, and each page has a button which is bound to a java method in the Activity. I would like to know how to initialise all the buttons in the Activity for multiple pages, because at the moment I can only initialise the button for the first page of the views; I cannot initialise the buttons for the second and third pages. I have placed all the jobs to be done for all the buttons in a single activity. I am currently using this indicator: http://viewpagerindicator.com/

    Here is my adapter for the circle page indicator:

        @Override
        public Object instantiateItem(View collection, int position) {
            inflater = (LayoutInflater) collection.getContext()
                    .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            int resid = 0;
            //View v = null; // inflater.inflate( R.layout.gaugescreen, (ViewPager)collection, false );
            switch (position) {
                case 0:
                    resid = R.layout.gaugescreen;
                    break;
                case 1:
                    resid = R.layout.liveworkoutstatisticsscreen;
                    break;
                case 2:
                    resid = R.layout.mapscreen;
                    break;
                default:
                    resid = R.layout.gaugescreen;
                    break;
            }
            View view = inflater.inflate(resid, null);
            ((ViewPager) collection).addView(view, 0);
            return view;
        }

    Does anyone know how to achieve this? Thanks for any help in advance.
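
    One possible direction, sketched below: wire each page's button inside instantiateItem, once that page's view actually exists, instead of in onCreate (where only the first page has been inflated). R.id.pageButton, the activity field, and onPageButtonClicked are made-up names standing in for whatever the real layouts and Activity use:

        View view = inflater.inflate(resid, null);
        Button button = (Button) view.findViewById(R.id.pageButton);  // hypothetical id
        if (button != null) {
            final int page = position;
            button.setOnClickListener(new View.OnClickListener() {
                public void onClick(View v) {
                    // "activity" is assumed to be a reference the adapter holds
                    // to its hosting Activity; the method name is hypothetical.
                    activity.onPageButtonClicked(page);
                }
            });
        }
        ((ViewPager) collection).addView(view, 0);
        return view;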

    Read the article

  • Converting an AnsiString to a Unicode String

    - by jrodenhi
    I'm converting a D2006 program to D2010. I have a value stored in a single-byte-per-character string in my database, and I need to load it into a control that has a LoadFromStream, so my plan was to write the string to a stream and use that with LoadFromStream. But it did not work. In studying the problem, I see an issue that tells me that I don't really understand how conversion from AnsiString to Unicode string works. Here is some code I am puzzling over:

        oStringStream := TStringStream.Create(sBuffer);
        sUnicodeStream := oPayGrid.sStream;  //explicit conversion to unicode string
        iSize1 := StringElementSize(oPaygrid.sStream);
        iSize2 := StringElementSize(sUnicodeStream);
        oStringStream.WriteString(sUnicodeStream);

    When I get to the last line, iSize1 does equal 1 and iSize2 does equal 2, so that part is what I understood from my reading. But on the last line, after I write the string to the stream and look at the Bytes property of the string, it shows this (the string starts as '16,159'):

        (49 {$31}, 54 {$36}, 44 {$2C}, 49 {$31}, 53 {$35}, 57 {$39} ...

    I was expecting that it might look something like:

        (49 {$31}, 00 {$00}, 54 {$36}, 00 {$00}, 44 {$2C}, 00 {$00}, 49 {$31}, 00 {$00}, 53 {$35}, 00 {$00}, 57 {$39}, 00 {$00} ...

    I'm not getting the right results out of the LoadFromStream because it is reading from the stream two bytes at a time, but the data it is receiving is not arranged that way. What is it that I should do to give the LoadFromStream a well-formed stream of data based on a unicode string? Thank you for your help.

    Read the article

  • jquery - array problem help pls.

    - by russp
    Sorry folks, I really need help with posting an array problem. I would imagine it's quite simple, but beyond me. I have this jQuery function (using sortables):

        $(function() {
            $("#col1, #col2, #col3, #col4").sortable({
                connectWith: '.column',
                items: '.portlet:not(.ui-state-disabled)',
                stop: function () {
                    serial_1 = $('#col1').sortable('serialize');
                    serial_2 = $('#col2').sortable('serialize');
                    serial_3 = $('#col3').sortable('serialize');
                    serial_4 = $('#col4').sortable('serialize');
                }
            });
        });

    Now I can post it to a database like this, and I can loop this ajax through all 4 "serials":

        $.ajax({
            url: "test.php",
            type: "post",
            data: serial_1,
            error: function(){
                alert(testit);
            }
        });

    But that is not what I want to do, as it creates 4 rows in the DB table. I want/need to create a single "nested array" from the 4 serials so that it enters the DB as 1 (one) row. My "base" database data looks like this:

        a:4:{s:4:"col1";a:3:{i:1;s:6:"forums";i:2;s:4:"chat";i:3;s:5:"blogs";}s:4:"col2";a:2:{i:1;s:5:"pages";i:2;s:7:"members";}s:4:"col3";a:2:{i:1;s:9:"galleries";i:2;s:4:"shop";}s:4:"col4";a:1:{i:1;s:4:"news";}}

    Therefore the jQuery array should "replicate" and create it (obviously it will change on sorting). Help please - thanks in advance.

    Read the article

  • Reverse search in Hibernate Search

    - by Javi
    Hello, I'm using Hibernate Search (which uses Lucene) for searching some data I have indexed in a directory. It works fine, but I need to do a reverse search. By reverse search I mean that I have a list of queries stored in my database, and I need to check which of these queries match a Data object each time a Data object is created. I need it to alert the user when a Data object matches a query he has created. So I need to index this single Data object which has just been created and see which queries in my list have this object as a result.

    I've seen Lucene's MemoryIndex class for creating an index in memory, so I can do something like this example for every query in a list (though iterating over a Java list of queries would not be very efficient):

        //Iterating over my List<Query>
        MemoryIndex index = new MemoryIndex();
        //Add all fields
        index.addField("myField", "myFieldData", analyzer);
        ...
        QueryParser parser = new QueryParser("myField", analyzer);
        float score = index.search(query);
        if (score > 0.0f) {
            System.out.println("it's a match");
        } else {
            System.out.println("no match found");
        }

    The problem here is that this Data class has several Hibernate Search annotations (@Field, @IndexedEmbedded, ...) which indicate how fields should be indexed, so when I invoke the index() method on the FullTextEntityManager instance it uses this information to index the object in the directory. Is there a similar way to index it in memory using this information? Is there a more efficient way of doing this reverse search? Thanks

    Read the article

  • What happens when value types are created?

    - by Bob
    I'm developing a game using XNA and C# and was attempting to avoid calling new struct() type code each frame, as I thought it would freak the GC out. "But wait," I said to myself, "struct is a value type. The GC shouldn't get called then, right?" Well, that's why I'm asking here. I only have a very vague idea of what happens to value types. If I create a new struct within a function call, is the struct being created on the stack? Will it simply get pushed and popped and performance not take a hit? Further, would there be some memory limit or performance implications if, say, I need to create many instances in a single call? Take, for instance, this code:

        spriteBatch.Draw(tex, new Rectangle(x, y, width, height), Color.White);

    Rectangle in this case is a struct. What happens when that new Rectangle is created? What are the implications of having to repeat that line many times (say, thousands of times)? Is this Rectangle created, a copy sent to the Draw method, and then discarded (meaning no memory getting eaten up the more Draw is called in that manner in the same function)? P.S. I know this may be premature optimization, but I'm mostly curious and wish to have a better understanding of what is happening.

    Read the article

  • Delphi: Error when starting MCI

    - by marco92w
    I use the TMediaPlayer component for playing music. It works fine with most of my tracks, but it doesn't work with some of them. When I want to play those, an error message is shown which is in German but roughly means:

        In the project pMusicPlayer.exe an exception of the class EMCIDeviceError occurred. Message: "Error when starting MCI.". Process was stopped. Continue with "Single Command/Statement" or "Start".

    The program quits directly after calling the procedure "Play" of TMediaPlayer. This error occurred with the following file, for example:

        file size: 7.40 MB
        duration: 4:02 minutes
        bitrate: 256 kBit/s

    I've re-encoded this file with a bitrate of 128 kBit/s (file size 3.70 MB), and it works fine! What's wrong with the first file? Windows Media Player and other programs can play it without any problems. Is it possible that Delphi's TMediaPlayer cannot handle big files (e.g. 5 MB) or files with a high bitrate (e.g. 256 kBit/s)? What can I do to solve the problem?

    Read the article

  • Lights off effect and jquery placement on wordpress

    - by Alexander Santiago
    I'm trying to implement a lights on/off effect on single posts of my wordpress theme. I know that I have to put this code in my css, which I did already:

        #the_lights{
            background-color:#000;
            height:1px;
            width:1px;
            position:absolute;
            top:0;
            left:0;
            display:none;
        }
        #standout{
            padding:5px;
            background-color:white;
            position:relative;
            z-index:1000;
        }

    Now this is the code that I'm having trouble with:

        function getHeight() {
            if ($.browser.msie) {
                var $temp = $("").css("position", "absolute")
                    .css("left", "-10000px")
                    .append($("body").html());
                $("body").append($temp);
                var h = $temp.height();
                $temp.remove();
                return h;
            }
            return $("body").height();
        }

        $(document).ready(function () {
            $("#the_lights").fadeTo(1, 0);
            $("#turnoff").click(function () {
                $("#the_lights").css("width", "100%");
                $("#the_lights").css("height", getHeight() + "px");
                $("#the_lights").css({'display': 'block'});
                $("#the_lights").fadeTo("slow", 1);
            });
            $("#soft").click(function () {
                $("#the_lights").css("width", "100%");
                $("#the_lights").css("height", getHeight() + "px");
                $("#the_lights").css("display", "block");
                $("#the_lights").fadeTo("slow", 0.8);
            });
            $("#turnon").click(function () {
                $("#the_lights").css("width", "1px");
                $("#the_lights").css("height", "1px");
                $("#the_lights").css("display", "block");
                $("#the_lights").fadeTo("slow", 0);
            });
        });

    I think it's jQuery. Where do I place it and how do I call its function? I've been stuck on this thing for 6 hours now and any help would be greatly appreciated...

    Read the article

  • Characters in usernames that cause trouble

    - by acidzombie24
    I am testing out security and reliability issues on my site. I have made \n and \r illegal. I created a user with a null character in the name, which caused my PM system to not message the user. However, \b worked, and \t didn't allow copy/paste to work correctly: the browser (Firefox, which I am testing with) copied the tab as a single space, so the name is not the same and the username is not recognized. Since I can't copy/paste it easily, I'll probably disallow it. \f works as well, although I do see a symbol in the title, but nowhere else, because of the \f. What else should I try? It appears 0-31 and 127-159 (I don't understand this range) are illegal. What characters in the legal range might I want to disallow? I heard there is a zero-width space character. That may be something I want to disallow? What else is there?
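
    For reference, a minimal sketch of one way to screen a username for the ranges mentioned above (C0 controls 0-31 plus DEL and the C1 controls 127-159) and a few invisible formatting characters such as the zero-width space, assuming a Java backend; the explicit character list in the regex is only an example, not a complete blacklist:

        import java.util.regex.Pattern;

        public final class UsernameFilter {
            // \p{Cc} covers the control ranges 0-31 and 127-159; the explicit
            // escapes add zero-width joiners/spaces and the BOM (not exhaustive).
            private static final Pattern ILLEGAL =
                    Pattern.compile("[\\p{Cc}\\u200B\\u200C\\u200D\\uFEFF]");

            public static boolean isAllowed(String username) {
                return username != null && !ILLEGAL.matcher(username).find();
            }
        }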

    Read the article

  • Float addition promoted to double?

    - by Andreas Brinck
    I had a small WTF moment this morning. The WTF can be summarized with this:

        float x = 0.2f;
        float y = 0.1f;
        float z = x + y;
        assert(z == x + y); //This assert is triggered! (At least with Visual Studio 2008)

    The reason seems to be that the expression x + y is promoted to double and compared with the truncated version in z. (If I change z to double, the assert isn't triggered.) I can see that for precision reasons it would make sense to perform all floating point arithmetic in double precision before converting the result to single precision. I found the following paragraph in the standard (which I guess I sort of already knew, but not in this context):

        4.6.1. "An rvalue of type float can be converted to an rvalue of type double. The value is unchanged"

    My question is: is x + y guaranteed to be promoted to double, or is it at the compiler's discretion?

    UPDATE: Since many people have claimed that one shouldn't use == for floating point, I just want to state that in the specific case I'm working with, an exact comparison is justified. Floating point comparison is tricky; here's an interesting link on the subject which I think hasn't been mentioned.

    Read the article

  • How much effort do you have to put in to get gains from using SSE?

    - by John
    Case One

    Say you have a little class:

        class Point3D
        {
        private:
            float x,y,z;
        public:
            operator+=() ...etc
        };

        Point3D &Point3D::operator+=(Point3D &other)
        {
            this->x += other.x;
            this->y += other.y;
            this->z += other.z;
        }

    A naive use of SSE would simply replace these function bodies with a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes IIRC; does SSE, or are its instructions just like other instructions? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster?

    Case Two

    Instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats:

        float coordinateData[NUM_POINTS*3];

        void add(int i, int j) //yes it's unsafe, no overlap check... example only
        {
            for (int x = 0; x < 3; ++x)
            {
                coordinateData[i*3+x] += coordinateData[j*3+x];
            }
        }

    What about use of SSE here? Any better?

    In conclusion

    Is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?

    Read the article

  • Is it possible to wrap an asynchronous event and its callback in a function that returns a boolean?

    - by Rob Flaherty
    I'm trying to write a simple test that creates an image element, checks the image attributes, and then returns true/false. The problem is that using the onload event makes the test asynchronous. On its own this isn't a problem (using a callback as I've done in the code below is easy), but what I can't figure out is how to encapsulate this into a single function that returns a boolean. I've tried various combinations of closures, recursion, and self-executing functions but have had no luck. So my question: am I being dense and overlooking something simple, or is this in fact not possible, because, no matter what, I'm still trying to wrap an asynchronous function in synchronous expectations? Here's the code:

        var supportsImage = function(callback) {
            var img = new Image();
            img.onload = function() {
                //Check attributes and pass true or false to callback
                callback(true);
            };
            img.src = 'data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=';
        };

        supportsImage(function(status){
            console.log(status);
        });

    To be clear, what I want is to be able to wrap this in something such that it can be used like:

        if (supportsImage) {
            //Do some crazy stuff
        }

    Thanks! (Btw, I know there are a ton of SO questions regarding confusion about synchronous vs. asynchronous. Apologies if this can be reduced to something previously answered.)

    Read the article

  • Access 2007 and Special/Unicode Characters in SQL

    - by blockcipher
    I have a small Access 2007 database into which I need to import data from an existing spreadsheet and fit it to our new relational model. For the most part this seems to work pretty well. Part of the process is attempting to see if a record already exists in a target table using SQL. For example, if I extract book information out of the current row in the spreadsheet, it may contain a title and abstract. I use SQL to get the ID of a matching record, if it exists. This works fine except when I have data that's in a non-English language. In this case, it seems that there is some punctuation that is causing me problems. At least I think it's punctuation, as I do have some non-English fields without punctuation that do not give me any problems. Is there a built-in function that can escape these characters? Currently I have a small function that will escape the single quote character, but that isn't enough. Or, is there a list of Unicode characters that can interfere with how SQL wants data quoted? Thanks in advance.

    Read the article

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount (~50-100 MB) of managed memory. There are two approaches on the table (read: two senior devs can't agree), and not having the experience, the rest of the team is unsure which approach is more desirable, performance or maintainability. The data being collected consists of many small items, ~30000 of them, which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll call this large group of objects and references a single entity called a blob.

    Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections.

    Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing and remove handlers.

    The theory behind the second approach is that since large, longer-lived objects take longer for the GC to clean up, cutting the large objects into smaller bite-size morsels should let the garbage collector process them faster, thus a performance gain. So I think the basic question is this: does breaking apart large groups of interconnected objects optimize the data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.

    Read the article

  • Looking for all-in-one drm/installer/CD creation kit.

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, and installation of our products when a user gets them off our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is the fact that the automated system that works with the installer- and DRM-creation software we have is a disaster that needs to be put out of my misery. The list of products that we currently produce, and that a new system MUST be capable of producing:

        Retail CDs, with a certain level of obfuscation to make copying difficult.
        Downloadable installers that time out after a few hours of use of the product. After the time has expired, removing and reinstalling the product will leave you still blocked from use.
        Installers that will fail to work after a certain date.

    I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line issue is non-negotiable; this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32-ing the outputs).

    When using the "-server" option with Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup.

    The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test, excluding disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it's 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • confused about how to use JSON in C#

    - by Josh
    The answer to just about every single question about using C# with JSON seems to be "use JSON.NET", but that's not the answer I'm looking for. The reason I say that is, from everything I've been able to read in the documentation, JSON.NET is basically just a better-performing version of the DataContractSerializer built into the .NET framework, which means if I want to deserialize a JSON string, I have to define the full, strongly-typed class for EVERY request I might have. So if I need to get categories, posts, authors, tags, etc., I have to define a new class for every one of these things. This is fine if I built the client and know exactly what the fields are, but I'm using someone else's API, so I have no idea what the contract is unless I download a sample response string and create the class manually from the JSON string. Is that the only way it's done? Is there not a way to have it create a kind of hashtable that can be read with json["propertyname"]? Finally, if I do have to build the classes myself, what happens when the API changes and they don't tell me (as twitter seems to be notorious for doing)? I'm guessing my entire project will break until I go in and update the object properties... So what exactly is the general workflow when working with JSON? And by general I mean library-agnostic; I want to know how it's done in general, not specifically for a target library... I hope that made sense; this has been a very confusing area to get into... thanks!

    Read the article

  • R : remove columns from dataframe where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve the question myself. The dataframe has some properties as columns and each row represents one data set. I've done some sanitizing on this dataframe (e.g. getting rid of datasets which are not to be included in the evaluation). (Whoever might be interested: beforehand I aggregate around 5000 single text files and put them in a tsv; some of the properties have a sequence number like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high numbers for n; all remaining sets have much smaller numbers for n, but the property "button.pressed.50" is still there and all remaining sets have an NA in that column. Actually it's a different property, but the example should clarify my intention...) So the question is quite simple (for some sophisticated R pro): I need to get rid of columns where the value is NA for ALL rows. Could someone please help me out? (All I have managed is to get rid of columns where at least one NA exists, which dropped about half my columns...)

    Read the article

  • Hierarchical/Nested Database Structure for Comments

    - by Stephen Melrose
    Hi, I'm trying to figure out the best approach for a database schema for comments. The problem I'm having is that the comments system will need to allow nested/hierarchical comments, and I'm not sure how to design this out properly. My requirements are, Comments can be made on comments, so I need to store the tree hierarchy I need to be able to query the comments in the tree hierarchy order, but efficiently, preferably in a fast single query, but I don't know if this is possible I'd need to make some wierd queries, e.g. pull out the latest 5 root comments, and a maximum of 3 children for each one of those I read an article on the MySQL website on this very subject, http://dev.mysql.com/tech-resources/articles/hierarchical-data.html The "Nested Set Model" in theory sounds like it will do what I need, except I'm worried about querying the thing, and also inserting. If this is the right approach, How would I do my 3rd requirement above? If I have 2000 comments, and I add a new sub-comment on the first comment, that will be a LOT of updating to do. This doesn't seem right to me? Or is there a better approach for the type of data I'm wanting to store and query? Thank you

    Read the article
