Search Results

Search found 18029 results on 722 pages for 'stripe size'.


  • Disable caching in JPA (EclipseLink)

    - by James
    I want to use JPA (EclipseLink) to get data from my database. The database is changed by a number of other sources, so I want to go back to the database for every find I execute. I have read a number of posts on disabling the cache, but it does not seem to be working. I am trying to execute the following code:

        EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("default");
        EntityManager em = entityManagerFactory.createEntityManager();
        MyLocation one = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        MyLocation two = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        System.out.println(one == two);

    one == two is true, while I want it to be false. I have tried adding each/all of the following to my persistence.xml:

        <property name="eclipselink.cache.shared.default" value="false"/>
        <property name="eclipselink.cache.size.default" value="0"/>
        <property name="eclipselink.cache.type.default" value="None"/>

    I have also tried adding the @Cache annotation to the entity itself:

        @Cache(
            type = CacheType.NONE,  // Cache nothing
            expiry = 0,
            alwaysRefresh = true
        )

    Am I misunderstanding something? Thank you, James
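
    A side note on why one == two can stay true even with every cache setting above disabled: within a single EntityManager, the persistence context (the first-level cache) always hands back the same instance, and it cannot be switched off. A minimal sketch of working around that, assuming JPA 2.0 (the javax.persistence cache hints are standard; the entity and query names are the ones from the question):

        // needs: import javax.persistence.CacheRetrieveMode;
        MyLocation one = (MyLocation) em.createNamedQuery("MyLocation.findMyLoc")
                .setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS)
                .getResultList().get(0);
        em.clear(); // detach everything so the next query materializes a fresh instance
        MyLocation two = (MyLocation) em.createNamedQuery("MyLocation.findMyLoc")
                .setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS)
                .getResultList().get(0);
        System.out.println(one == two); // false: distinct instances, freshly loaded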


  • OutOfMemoryException

    - by Andrew
    I have an application that is pretty memory-hungry. It holds a large amount of data in some big arrays. I have recently been noticing the occasional OutOfMemoryException. These OutOfMemoryExceptions are occurring long before my application (ASP.NET) has used up the 800 MB available to it. I have tracked the issue down to the area of code where the array is resized. The array contains a structure that is 74 bytes in size. (I know that you shouldn't create structs that are bigger than 16 bytes, but this application is a port from a VB6 application.) I have tried changing the struct to a class, and this appears to have fixed the problem for now. I think the reason that changing to a class solves the problem is that when using a struct and the array is resized, a contiguous segment of memory large enough to store the new array (e.g. (currentArraySize + increaseBySize) * 74 bytes) must be reserved, and when no such segment can be found, the OutOfMemoryException is thrown. This isn't the case with a class, as each element of the array only needs 8 bytes to store a pointer to the new object. Is my thinking correct here?
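
    To put numbers on that reasoning, a quick sketch (Java used here only to make the arithmetic concrete; the 74-byte element and 8-byte reference figures are the ones above, and actual CLR sizes depend on padding and platform):

        long elements = 10000000L;              // hypothetical array length
        long valueArrayBytes = elements * 74;   // one contiguous block: ~740 MB
        long refArrayBytes   = elements * 8;    // contiguous block of references: ~80 MB
        // with references, the 74-byte objects themselves can live anywhere on the
        // heap, so no single huge contiguous region is required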


  • Can't start the portable version of NetBeans 6.9.1 IDE

    - by Coder
    I downloaded the portable version of NetBeans (netbeans-6.9.1-201007282301-ml.zip) from the NetBeans site and changed the config file etc/netbeans.conf as indicated on the NetBeans site. The file contents are below.

        # ${HOME} will be replaced by JVM user.home system property
        #netbeans_default_userdir="${HOME}/.netbeans/6.9"
        netbeans_default_userdir=".netbeans/6.9"

        # Options used by NetBeans launcher by default, can be overridden by explicit
        # command line switches:
        netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true"
        # Note that a default -Xmx is selected for you automatically.
        # You can find this value in var/log/messages.log file in your userdir.
        # The automatically selected value can be overridden by specifying -J-Xmx here
        # or on the command line.

        # If you specify the heap size (-Xmx) explicitely, you may also want to enable
        # Concurrent Mark & Sweep garbage collector. In such case add the following
        # options to the netbeans_default_options:
        # -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled
        # (see http://wiki.netbeans.org/wiki/view/FaqGCPauses)

        # Default location of JDK, can be overridden by using --jdkhome <dir>:
        #netbeans_jdkhome="/path/to/jdk"
        netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24\"

        # Additional module clusters, using ${path.separator} (';' on Windows or ':' on Unix):
        #netbeans_extraclusters="/absolute/path/to/cluster1:/absolute/path/to/cluster2"

        # If you have some problems with detect of proxy settings, you may want to enable
        # detect the proxy settings provided by JDK5 or higher.
        # In such case add -J-Djava.net.useSystemProxies=true to the netbeans_default_options.

    But it refuses to start when I try to run it. If I change the JDK path to something incorrect, it complains that it can't find the JDK, so I think the JDK path is correct. It also creates a .netbeans directory when I try to start it. I don't see any errors, and it just doesn't do anything else observable. Does anybody know how to set up this version of NetBeans? Thanks.
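
    One plausible culprit (an assumption, not something confirmed in the question): the netbeans_jdkhome value ends with a backslash immediately before the closing quote, which a launcher that treats backslash as an escape character will misread. A version without the trailing backslash would be:

        netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24"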


  • How to suggest changes as a recently-hired employee?

    - by ereOn
    I was recently hired at a big company (thousands of people, to give an idea of the size). They said they hired me because of my rigor and because I was, despite my youth (I'm 25), experienced as a C/C++ programmer. Now that I'm in, I can see that the whole system is old and often uses obsolete technologies. There is no naming convention (files, functions, variables, ...), they don't use version control, they don't use exceptions or polymorphism, and it seems like almost everybody has lost their passion (some of them are only 30 years old). I'd like to suggest some changes, but I don't want to be "the new guy who wants to change everything just because he doesn't want to fit in". I tried to fit in, but actually it takes me one week to do what I would do in one afternoon, just because of the poor tools we're forced to use. A lot of my colleagues never look at the new things and techniques that people use nowadays. It's like they have just given up. The situation is really frustrating. Have you ever been in a similar situation and, if so, what advice would you give me? Is there a subtle way of changing things without becoming the black sheep here? Or should I just give up my passion and energy as well? Thank you.


  • RijndaelManaged: plaintext length detection

    - by sheepsimulator
    I am spending some time learning how to use the RijndaelManaged library in .NET, and I developed the following function to test encrypting text, with slight modifications from the MSDN library:

        Function encryptBytesToBytes_AES(ByVal plainText As Byte(), ByVal Key() As Byte, ByVal IV() As Byte) As Byte()
            ' Check arguments.
            If plainText Is Nothing OrElse plainText.Length <= 0 Then
                Throw New ArgumentNullException("plainText")
            End If
            If Key Is Nothing OrElse Key.Length <= 0 Then
                Throw New ArgumentNullException("Key")
            End If
            If IV Is Nothing OrElse IV.Length <= 0 Then
                Throw New ArgumentNullException("IV")
            End If

            ' Declare the RijndaelManaged object used to encrypt the data.
            Dim aesAlg As RijndaelManaged = Nothing

            ' Declare the stream used to encrypt to an in-memory array of bytes.
            Dim msEncrypt As MemoryStream = Nothing

            Try
                ' Create a RijndaelManaged object with the specified key and IV.
                aesAlg = New RijndaelManaged()
                aesAlg.BlockSize = 128
                aesAlg.KeySize = 128
                aesAlg.Mode = CipherMode.ECB
                aesAlg.Padding = PaddingMode.None
                aesAlg.Key = Key
                aesAlg.IV = IV

                ' Create an encryptor to perform the stream transform.
                Dim encryptor As ICryptoTransform = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV)

                ' Create the streams used for encryption.
                msEncrypt = New MemoryStream()
                Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
                    Using swEncrypt As New StreamWriter(csEncrypt)
                        ' Write all data to the stream.
                        swEncrypt.Write(plainText)
                    End Using
                End Using
            Finally
                ' Clear the RijndaelManaged object.
                If Not (aesAlg Is Nothing) Then
                    aesAlg.Clear()
                End If
            End Try

            ' Return the encrypted bytes from the memory stream.
            Return msEncrypt.ToArray()
        End Function

    Here's the actual code I am calling encryptBytesToBytes_AES() with:

        Private Sub btnEncrypt_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnEncrypt.Click
            Dim bZeroKey As Byte() = {&H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0}
            PrintBytesToRTF(encryptBytesToBytes_AES(bZeroKey, bZeroKey, bZeroKey))
        End Sub

    However, I get an exception thrown on swEncrypt.Write(plainText) stating that the 'Length of the data to encrypt is invalid.' But I know that the sizes of my key, IV, and plaintext are all 16 bytes == 128 bits == aesAlg.BlockSize. Why is it throwing this exception? Is it because the StreamWriter is trying to make a String (ostensibly with some encoding) and it doesn't like &H0 as a value?
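
    For comparison, a minimal Java sketch of the intended operation (zero key, AES/ECB/NoPadding, a single 16-byte block) that feeds the raw bytes straight to the cipher instead of going through a text writer. A plausible reading of the exception, offered here as a guess: StreamWriter's Object overload writes the string "System.Byte[]" rather than the array's contents, and that string's encoded length is not a multiple of the block size, which PaddingMode.None cannot tolerate.

        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;

        public class AesBlockSketch {
            public static void main(String[] args) throws Exception {
                byte[] key = new byte[16];    // all-zero 128-bit key
                byte[] plain = new byte[16];  // exactly one block, so NoPadding is legal
                Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding"); // ECB ignores the IV
                cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
                byte[] ct = cipher.doFinal(plain);   // raw bytes in, raw bytes out
                System.out.println(ct.length + " ciphertext bytes"); // 16
            }
        }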


  • Compiling and using the NTL C++ library for Windows

    - by Martin Lauridsen
    I have compiled the NTL infinite-precision integer arithmetic library for C++ using Microsoft Visual Studio 2008. I did it as explained on this site, using the Visual Studio interface rather than the command prompt. Actually, I would rather do it from the command prompt, but I was not sure how to. Anyhow, I got the library compiled, and I now want to compile a program using the library from the command prompt. The program I am trying to compile has been tested on a Linux system, where I compile it with the following:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include mpqs.cpp main.cpp -o main \
            -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl \
            -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm

    Never mind the GMP stuff; I don't have that installed on Windows. It is purely an optional thing that makes NTL run faster. Anyhow, this works fine on Linux. Now on Windows I run the following:

        cl /EHsc /I D:\Downloads\WinNTL-5_5_2\include mpqs.cpp main.cpp /link /LIBPATH:"D:\Documents\Visual Studio 2008\Projects\ntl\Debug"

    But this results in the following errors:

        mpqs.cpp
        mpqs.cpp(38) : error C2039: 'find_smooth_vals' : is not a member of 'QS'
                d:\desktop\qs\mpqs.h(12) : see declaration of 'QS'
        mpqs.cpp(41) : error C2065: 'M' : undeclared identifier
        mpqs.cpp(41) : error C2065: 'n' : undeclared identifier
        mpqs.cpp(42) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(42) : error C2228: left of '.size' must have class/struct/union type is ''unknown-type''
        mpqs.cpp(43) : error C2065: 'sieve_table' : undeclared identifier
        mpqs.cpp(44) : error C2065: 'qx_table' : undeclared identifier
        mpqs.cpp(44) : error C3861: 'test_smoothness': identifier not found
        mpqs.cpp(45) : error C2065: 'smooth_indices' : undeclared identifier
        mpqs.cpp(45) : error C2228: left of '.push_back' must have class/struct/union type is ''unknown-type''
        main.cpp
        Generating Code...

    It is as if my mpqs.h file is not included in the compilation process? (Note that the error output refers to d:\desktop\qs\mpqs.h, not a header next to the sources.) Also, I don't understand why it complains about .push_back() for a vector type. Help is much appreciated!


  • Java 2D Resize

    - by jon077
    I have some old Java 2D code I want to reuse, but was wondering: is this the best way to get the highest quality images?

        public static BufferedImage getScaled(BufferedImage imgSrc, Dimension dim) {
            // This code ensures that all the pixels in the image are loaded.
            Image scaled = imgSrc.getScaledInstance(dim.width, dim.height, Image.SCALE_SMOOTH);

            // This code ensures that all the pixels in the image are loaded.
            Image temp = new ImageIcon(scaled).getImage();

            // Create the buffered image.
            BufferedImage bufferedImage = new BufferedImage(temp.getWidth(null), temp.getHeight(null),
                    BufferedImage.TYPE_INT_RGB);

            // Copy image to buffered image.
            Graphics g = bufferedImage.createGraphics();

            // Clear background and paint the image.
            g.setColor(Color.white);
            g.fillRect(0, 0, temp.getWidth(null), temp.getHeight(null));
            g.drawImage(temp, 0, 0, null);
            g.dispose();

            // j2d's image scaling quality is rather poor, especially when
            // scaling down an image to a much smaller size. We'll post filter
            // our images using a trick found at
            // http://blogs.cocoondev.org/mpo/archives/003584.html
            // to increase the perceived quality....
            float origArea = imgSrc.getWidth() * imgSrc.getHeight();
            float newArea = dim.width * dim.height;
            if (newArea <= (origArea / 2.)) {
                bufferedImage = blurImg(bufferedImage);
            }
            return bufferedImage;
        }

        public static BufferedImage blurImg(BufferedImage src) {
            // soften factor - increase to increase blur strength
            float softenFactor = 0.010f;

            // convolution kernel (blur)
            float[] softenArray = {
                0,            softenFactor,            0,
                softenFactor, 1 - (softenFactor * 4),  softenFactor,
                0,            softenFactor,            0 };

            Kernel kernel = new Kernel(3, 3, softenArray);
            ConvolveOp cOp = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
            return cOp.filter(src, null);
        }
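
    A common alternative sketch (not from the original post): skip getScaledInstance entirely and draw with interpolation hints on a Graphics2D, which for large reductions tends to beat a single smooth-scale plus a blur pass. It uses the same java.awt imports as the code above.

        public static BufferedImage scaleWithHints(BufferedImage src, int w, int h) {
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = out.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                               RenderingHints.VALUE_INTERPOLATION_BICUBIC);
            g.setRenderingHint(RenderingHints.KEY_RENDERING,
                               RenderingHints.VALUE_RENDER_QUALITY);
            g.drawImage(src, 0, 0, w, h, null);  // one bicubic resample
            g.dispose();
            return out;
        }

    For very aggressive downscales, calling this repeatedly, halving the dimensions each step until the target size is reached, usually improves perceived sharpness further.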


  • assigning width to li

    - by badnaam
    I am using jQuery UI tabs, which requires the tabs to be <li> elements, by default at least. I only need two tabs, but I am not able to size them so that they are both equal and take up 100% of the available width of the ul. Here is my code:

        <div id="intro_tabs" class="tabs">
          <ul id="intro_nav">
            <li>
              <h3><a href="#tabs-1" class="null_link"><%= t('home.index.what_is_it') %></a></h3>
            </li>
            <li>
              <h3><a href="#tabs-2" class="null_link"><%= t('home.index.how_works') %></a></h3>
            </li>
          </ul>
          <div id="tabs-1">
            <%= simple_format t 'home.index.what_intro_details' %>
          </div>
          <div id="tabs-2">
            <div id="intro_accordion">
              <h3><a href="#">Users</a></h3>
              <div>
                <%= t 'home.index.how_intro_details_user' %>
              </div>
              <h3><a href="#">Merchants</a></h3>
              <div>
                <%= t 'home.index.how_intro_details_merchant' %>
              </div>
            </div>
          </div>
        </div>

    I have tried using the CSS property width: 50% on both li's, but it doesn't work. Thanks.
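
    A sketch of one way this is commonly handled (selectors use the ids from the markup above; the margin and box-sizing lines are there on the assumption that jQuery UI's default theme adds margins and borders to each tab, which pushes two 50% tabs past 100% unless those extras are absorbed):

        #intro_nav { width: 100%; overflow: hidden; }
        #intro_nav li {
            float: left;
            width: 50%;
            margin: 0;                 /* cancel the theme's inter-tab margin */
            -moz-box-sizing: border-box;
            box-sizing: border-box;    /* count borders/padding inside the 50% */
        }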


  • jQuery ajax form plugin submits multiple times to the server, but only in IE6

    - by Dino
    I have the following form, used to temporarily upload a photo to a J2EE server and then crop it with the imageAreaSelect plugin:

        <form name="formAvatarName" id="formAvatar" method="post" action="../admin/admin-avatar-upload" enctype="multipart/form-data">
          <label>Upload a Picture of Yourself</label>
          <input type="file" name="upload" id="upload" size="20" />
          <input type="button" id="formAvatarSubmit" value="formAvatar" onclick="invia()"/>
        </form>

    I am using the jQuery form plugin to do ajax submission; this is my latest attempt :)

        $('#formAvatar').unbind('submit').bind('submit', function() {
            alert('aho');
            $(this).ajaxSubmit(options);
            return false;
        });

    Only when testing with IE6, I can see that the submission to the server is done multiple times (the first time I get the uploaded file; the other times the submission seems empty and I get an error). With IE7, IE8, Firefox, and Chrome it works fine. Any ideas? Many thanks in advance!


  • Applets failing to load

    - by Roy Tang
    While testing our setup for user acceptance testing, we got some reports that Java applets in our web application would occasionally fail to load. The environment where it was reported was WinXP/IE6, and there were no errors found in the Java console. Obviously we'd like to avoid this. What sort of things should we be checking for here? On our local servers, everything seems fine. There's some turnaround time when sending questions to the on-site guy, so I'd like to cover as many possible causes as I can. Some more info: we have multiple applets, and in the instances where they fail to load, all of them fail to load. The applet jar files vary in size from 2 MB to 8 MB. I'm told it seems more likely to happen if the applets aren't cached yet, i.e. once they've been loaded on a given machine, further runs on that machine go smoothly. I'm wondering if there's some sort of network transfer error when downloading the applets, but I don't know how to verify that. Any advice is welcome!


  • '$.fn' is null or not an object

    - by metal-gear-solid
    Problem 1. Error: Microsoft JScript runtime error: '$.fn' is null or not an object. Error area:

        $.fn.apply = function(item, content, header) {
            $(".featureBox" + item).css('z-index', "1000");
            $("img.featureBox" + item + "top").attr("src", basepath + "box-big-top.jpg");
            $("img.featureBox" + item + "imgcut").attr("src", basepath + "box-big-img" + item + ".jpg");
            featureboxcont[item].attr("src", basepath + "box-big-cont.jpg");
            $("img.featureBox" + item + "foot").attr("src", basepath + "box-big-bot2.jpg");
            //$("#NoteModalDialog > #x-dlg-bd > #x-dlg-tab > #acc-ct")
            $("#box" + item + "headtext > .h2div > h2").text(header);
            $("#box" + item + "bottext").css({"top": "181px", "width": "205px", "font-size": "12px", "color": "#ffffff", "left": "10"});
            $("#box" + item + "foottext").css({"top": footheight + "px", "width": "215px", "left": "20"});
            $("#box" + item + "hidden").css({"display": "block"});
            $("#box" + item + "bottext").text(content);
            $("#box" + item + "headtext > .h2div > h2").removeClass("sIFR-replaced");
            callsIFR();
        }

    Problem 2. Error: Microsoft JScript runtime error: 'null' is null or not an object. Error area:

        $("#innerWrapper").addClass("js-version");

    I'm also using prototype.js on the page.


  • Calculate total batch upload transfer percent with limited information

    - by GONeale
    I have a system which uploads files to a server one by one and displays a progress bar for the current file's upload progress, and underneath it a second progress bar which I want to indicate the percentage of the batch completed, across all files queued to upload. The information and algorithms I can work out are:

        Bytes Sent / Total Bytes To Send = first progress bar (e.g. 512 KB of 1024 KB = 50%)

    That works fine. However, supposing I have two other files left to upload, but both file sizes are unknown (a file's size is only known once it is about to commence uploading, at which point it is compressed and its size determined), how would I go about driving my second progress bar? I didn't think this would be possible, as I would need "Total Bytes Sent" / "Total Bytes To Send" to replicate the logic of my first progress bar on a larger scale. However, I did get a version working:

        "Current file number we are on" / "total number of files to send"

    This returns the percentage through the batch, but obviously it does not update incrementally, and it's pretty crude. So on further thinking, I thought that if I could incorporate the current file's percentage into this algorithm, I could perhaps get the correct progress percentage for the batch's current point. I tried this algorithm, but alas to no avail (sorry to any math heads; it's probably quite apparent why it won't work):

        ("Current file number we are on" / "total number of files to send") * ("Bytes Sent" / "Total Bytes To Send")

    For example, I thought I was on the right track when testing with this example: 2/3 (2nd of 3 files) = 66%, which is right so far, but then when I multiplied by 0.20 (indicating only 20% of the 2nd file has uploaded), I went back to 13%. What I need is only a little over 33%! I did try the inverse at 0.80, and also (2/3 * (2/3 * 0.2)). Can this be done without knowing the total bytes in the batch? Please help! Thank you!
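
    For what it's worth, a sketch of the usual formula when per-file sizes arrive this late (it treats every file as an equal share of the batch, which is all the available information allows):

        overall fraction = (files already completed + fraction of current file) / total files

    A minimal sketch in Java (names are illustrative, not from the original post):

        static double batchProgress(int filesCompleted, int totalFiles,
                                    long bytesSent, long totalBytesThisFile) {
            double current = totalBytesThisFile > 0
                    ? (double) bytesSent / totalBytesThisFile
                    : 0.0;                                   // size not known yet
            return (filesCompleted + current) / totalFiles;  // 0.0 .. 1.0
        }

    With the example from the question (on the 2nd of 3 files, 20% of it sent) this gives (1 + 0.2) / 3, i.e. about 40%, which lands between the 33% reached after the first file and the 66% reached after the second.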


  • Looking for advice on importing a large dataset in sqlite and Cocoa/Objective-C

    - by jluckyiv
    I have a fairly large hierarchical dataset I'm importing. The total size of the database after import is about 270 MB in sqlite. My current method works, but I know I'm hogging memory as I do it. For instance, if I run with Zombies, my system freezes up (although it will execute just fine if I don't use that Instrument). I was hoping for some algorithm advice. I have three hierarchical tables comprising about 400,000 records. The highest level has about 30 records, the next has about 20,000, and the last has the balance. Right now I'm using nested for loops to import. I know I'm creating an unreasonably large object graph, but I'm also looking to serialize to JSON or XML, because I want to break up the records into downloadable chunks for the end user to import a la carte. I have the code written to do the serialization, but I'm wondering if I can serialize the object graph if I only have pieces in memory. Here's pseudocode showing the basic process for the sqlite import; I left out the unnecessary detail.

        [database open];
        [database beginTransaction];

        NSArray *firstLevels = [[FirstLevel fetchFromURL:url] retain];
        for (FirstLevel *firstLevel in firstLevels)
        {
            [firstLevel save];
            int id1 = [firstLevel primaryKey];

            NSArray *secondLevels = [[SecondLevel fetchFromURL:url] retain];
            for (SecondLevel *secondLevel in secondLevels)
            {
                [secondLevel saveWithForeignKey:id1];
                int id2 = [secondLevel primaryKey];

                NSArray *thirdLevels = [[ThirdLevel fetchFromURL:url] retain];
                for (ThirdLevel *thirdLevel in thirdLevels)
                {
                    [thirdLevel saveWithForeignKey:id2];
                }

                [database commit];
                [database beginTransaction];
                [thirdLevels release];
            }
            [secondLevels release];
        }

        [database commit];
        [database release];
        [firstLevels release];


  • jQuery background-position animation

    - by depi
    I've created an image which is basically a CSS sprite of three images stacked together. Its size is 278x123, so it is effectively three images of 278x41. What I am trying to do is animate it by changing the background position. I've tried many things; one not-quite-working solution is the following:

        var $slogan = $('#header h2 span');
        $slogan.css({backgroundPosition: '0px 0px'});

        function slogan_animation() {
            if ($slogan.css('background-position') == '0px 0px') {
                $slogan.fadeIn('slow').css('background-position', '0px -41px').fadeOut('slow');
            } else if ($slogan.css('background-position') == '0px -41px') {
                $slogan.fadeIn('slow').css('background-position', '0px -82px').fadeOut('slow');
            } else if ($slogan.css('background-position') == '0px -82px') {
                $slogan.fadeIn('slow').css('background-position', '0px 0px').fadeOut('slow');
            }
        }

        setInterval(slogan_animation, 2000);

    Any ideas how I could do that? Basically I just need to set my background position to "0px 0px", then move it to "0px -41px", then "0px -82px", and then loop back to "0px 0px". It would also be great to have a fadeIn() effect between those. Any ideas? Thank you.


  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of 20+ back. I've generated a few files of 150 kB each to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out; any hints are welcome :)

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/1]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            lists:foreach(fun do_spawn/1, Ids).

        do_spawn(Id) ->
            proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

        do_send_req(Id) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]);
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
            end.

    That's the output:

        Requesting ID 1 ...
        Requesting ID 2 ...
        Requesting ID 3 ...
        Requesting ID 4 ...
        Requesting ID 5 ...
        Requesting ID 6 ...
        Requesting ID 7 ...
        Requesting ID 8 ...
        Requesting ID 9 ...
        Requesting ID 10 ...
        Requesting ID 11 ...
        Requesting ID 12 ...
        Requesting ID 13 ...
        Requesting ID 14 ...
        Requesting ID 15 ...
        Requesting ID 16 ...
        Requesting ID 17 ...
        Requesting ID 18 ...
        Requesting ID 19 ...
        Requesting ID 20 ...
        Requesting ID 21 ...
        Requesting ID 22 ...
        Requesting ID 23 ...
        Requesting ID 24 ...
        Requesting ID 25 ...
        Requesting ID 26 ...
        Requesting ID 27 ...
        Requesting ID 28 ...
        Requesting ID 29 ...
        Requesting ID 30 ...
        Requesting ID 31 ...
        Requesting ID 32 ...
        Requesting ID 33 ...
        Requesting ID 34 ...
        Requesting ID 35 ...
        Requesting ID 36 ...
        Requesting ID 37 ...
        Requesting ID 38 ...
        Requesting ID 39 ...
        Requesting ID 40 ...
        Requesting ID 41 ...
        Requesting ID 42 ...
        Requesting ID 43 ...
        Requesting ID 44 ...
        Requesting ID 45 ...
        Requesting ID 46 ...
        Requesting ID 47 ...
        Requesting ID 48 ...
        Requesting ID 49 ...
        Requesting ID 50 ...
        OK -- ID: 49 -- Status: "200" -- Content length: 150000
        OK -- ID: 47 -- Status: "200" -- Content length: 150000
        OK -- ID: 50 -- Status: "200" -- Content length: 150000
        OK -- ID: 17 -- Status: "200" -- Content length: 150000
        OK -- ID: 48 -- Status: "200" -- Content length: 150000
        OK -- ID: 45 -- Status: "200" -- Content length: 150000
        OK -- ID: 46 -- Status: "200" -- Content length: 150000
        OK -- ID: 10 -- Status: "200" -- Content length: 150000
        OK -- ID: 09 -- Status: "200" -- Content length: 150000
        OK -- ID: 19 -- Status: "200" -- Content length: 150000
        OK -- ID: 13 -- Status: "200" -- Content length: 150000
        OK -- ID: 21 -- Status: "200" -- Content length: 150000
        OK -- ID: 16 -- Status: "200" -- Content length: 150000
        OK -- ID: 27 -- Status: "200" -- Content length: 150000
        OK -- ID: 03 -- Status: "200" -- Content length: 150000
        OK -- ID: 23 -- Status: "200" -- Content length: 150000
        OK -- ID: 29 -- Status: "200" -- Content length: 150000
        OK -- ID: 14 -- Status: "200" -- Content length: 150000
        OK -- ID: 18 -- Status: "200" -- Content length: 150000
        OK -- ID: 01 -- Status: "200" -- Content length: 150000
        OK -- ID: 30 -- Status: "200" -- Content length: 150000
        OK -- ID: 40 -- Status: "200" -- Content length: 150000
        OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but I get the same behaviour :(

        -module(crawler).
        -define(BASE_URL, "http://46.4.117.69/").
        -export([start/0, send_reqs/0, do_send_req/2]).

        start() ->
            ibrowse:start(),
            proc_lib:spawn(?MODULE, send_reqs, []).

        to_url(Id) ->
            ?BASE_URL ++ integer_to_list(Id).

        fetch_ids() ->
            lists:seq(1, 50).

        send_reqs() ->
            spawn_workers(fetch_ids()).

        spawn_workers(Ids) ->
            %% collect reference to each worker
            Refs = [ do_spawn(Id) || Id <- Ids ],
            %% wait for response from each worker
            wait_workers(Refs).

        wait_workers(Refs) ->
            lists:foreach(fun receive_by_ref/1, Refs).

        receive_by_ref(Ref) ->
            %% receive message only from worker with specific reference
            receive
                {Ref, done} ->
                    done
            end.

        do_spawn(Id) ->
            Ref = make_ref(),
            proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
            Ref.

        do_send_req(Id, {Pid, Ref}) ->
            io:format("Requesting ID ~p ... ~n", [Id]),
            Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
            case Result of
                {ok, Status, _H, B} ->
                    io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n",
                              [Id, Status, length(B)]),
                    %% send message that work is done
                    Pid ! {Ref, done};
                Err ->
                    io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                    %% repeat request if there was error while fetching a page,
                    do_send_req(Id, {Pid, Ref})
                    %% or - if you don't want to repeat request, put there:
                    %% Pid ! {Ref, done}
            end.

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch entire files (file size 150,000 bytes each) - the crawler fetches some files only partially; see the following web server log :(

        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
        82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(


  • Data Mappers, Models and Images

    - by James
    I've seen and read plenty of blog posts and forum topics talking about and giving examples of Data Mapper / Model implementations in PHP, but I've not seen any that also deal with saving files/images. I'm currently working on a Zend Framework based project, and I'm doing some image manipulation in the model (which is passed a file path); I'm then leaving it to the mapper to save that file to the appropriate location - is this common practice? But then, how do you deal with creating, say, three different image sizes from the one passed in? At the moment I have a setImage($path_to_tmp_name) which checks the image type, resizes, and then saves back to the original filename. A call to getImagePath() then returns the current file path, which the data mapper can use and then change with a call to setImagePath($path) once it has saved the file to the appropriate location, say /content/my_images. Does this sound practical to you? Also, how would you deal with getting the URL to that image? Do you see that as being something that the model should be providing? It seems to me like the model shouldn't worry about where the images are being stored or ultimately how they're accessed through a browser, so I'm inclined to put that in the ini file and just pass the URL prefix to the view through the controller. Does that sound reasonable? I'm using GD for image manipulation - not that that's of any relevance.

    UPDATE: I've been wondering if the image resizing should be done in the model at all. The model could require that it's provided a "main" image and a "thumb" image, both of certain dimensions. I've thought about creating a getImageSpecs() function in the model that would return something that defines the required sizes; then a separate image manipulation class could carry out the resizing (perhaps in the controller?) and just pass the final paths into the model using something like setImagePaths($images). Any thoughts much appreciated :) James.
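
    To make the update's idea concrete, a small sketch of the shape it describes (illustrative names; written in Java here only because explicit types make the contract visible - the project itself is PHP/Zend Framework):

        import java.util.List;
        import java.util.Map;

        // what the model declares it needs
        final class ImageSpec {
            final String name;          // e.g. "main", "thumb"
            final int width, height;
            ImageSpec(String name, int width, int height) {
                this.name = name; this.width = width; this.height = height;
            }
        }

        interface HasImages {
            List<ImageSpec> getImageSpecs();                 // model states its requirements
            void setImagePaths(Map<String, String> byName);  // mapper/controller fulfils them
        }

    The resizing component consumes getImageSpecs(), produces the files, and hands the resulting paths back; the model never learns how or where the resizing happened, which matches the separation proposed above.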


  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (server reboot, usually). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine; the DB is under 1.5 GB currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects & sysindexes and running

        SELECT * FROM <table> WITH (INDEX(<index_name>)) ORDER BY <index_fields>

    for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are out of normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so such a process slowing users down more (until it completes) than an unprimed working set would is not an issue.
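
    For reference, a sketch of the scripted approach for a single table (placeholder table and index names; COUNT_BIG(*) still touches every page the hinted index covers, but avoids dragging all the columns back to the client the way SELECT * would):

        -- read every page of the heap or clustered index
        SELECT COUNT_BIG(*) FROM dbo.MyTable WITH (INDEX(0));

        -- read every page of one nonclustered index
        SELECT COUNT_BIG(*) FROM dbo.MyTable WITH (INDEX(IX_MyTable_SomeKey));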


  • Varying performance of MSVC release exe

    - by Andrew
    I am curious what could be the reason for highly varying performance of the same executable. Sometimes I run it and it takes 20 seconds, and sometimes 110. The source is compiled with MSVC in Release mode with standard options. The code is here:

        vector<double> Un;
        vector<double> Ucur;
        double *pUn, *pUcur;
        ...

        // time marching
        for (old_time = time - logfreq, time += dt; time <= end_time; time += dt)
        {
            for (i = 1, j = Un.size() - 1, pUn = &Un[1], pUcur = &Ucur[1];
                 i < j;
                 ++i, ++pUn, ++pUcur)
            {
                *pUcur = (*pUn) * (1.0 - 0.5 * alpha * ( *(pUn+1) - *(pUn-1) ));
            }
            Ucur[0] = (Un[0]) * (1.0 - 0.5 * alpha * ( Un[1] - Un[j] ));
            Ucur[j] = (Un[j]) * (1.0 - 0.5 * alpha * ( Un[0] - Un[j-1] ));
            Un = Ucur;
        }


  • UIView animation only animating some of the things I ask it to

    - by Ben
    I have a series of (say) boxes on the screen in a row, all subviews of my main view. Each is a UIView. I want to shift them all left and have a new view also enter the screen from the right, in lockstep. Here's what I'm doing:

        // First add a dummy view offscreen
        UIView *stagingView = /* make this view, which sets up its width/height */;
        CGRect frame = [stagingView frame];
        frame.origin.x = /* just off the right side of the screen */;
        [stagingView setFrame:frame];
        [self addSubview:stagingView];

    And then I set up animations in one block for all of my subviews (which includes the one I just added):

        [UIView beginAnimations:@"shiftLeft" context:NULL];
        [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationDidStopSelector:@selector(_animationDidStop:context:)];
        [UIView setAnimationDuration:0.3];
        for (UIView *view in [self subviews]) {
            CGRect frame = [view frame];
            frame.origin.x -= (frame.size.width + viewPadding);
            [view setFrame:frame];
        }
        [UIView commitAnimations];

    Here's what I expect: the (three) views already on screen get shifted left, and the newly staged view marches in from the right at the same time. Here's what happens: the newly staged view animates in exactly as expected, and the views already on the screen do not appear to move at all! (Or possibly they jump without animation to their end locations.) And! If I comment out the whole business of creating the new subview offscreen... the ones onscreen do animate correctly! Huh? (Thanks!)


  • OutOfMemory during paging

    - by Tony
    I am using an ObjectDataSource and a ListView with custom paging. If the total number of rows is too big, I get an OutOfMemory exception. It seems to be caused by some array, which I don't understand: the total number of rows should never cause any array to be filled with elements; only the page size should! This is the logger output:

        ****EXCEPTION # 3 : 4/30/2010 9:43:07 PM
        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown.
        ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.Web.UI.WebControls.ListView.CreateChildControls()
           at System.Web.UI.Control.EnsureChildControls()
           at System.Web.UI.WebControls.ListView.get_Controls()
           at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
           at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
           at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
           at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
           at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
           at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
           at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
           at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
           at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
           at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
           at System.Web.UI.Control.LoadChildViewStateByIndex(ArrayList childState)
           at System.Web.UI.Control.LoadViewStateRecursive(Object savedState)
           at System.Web.UI.Page.LoadAllState()
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           --- End of inner exception stack trace ---
           at System.Web.UI.Page.HandleError(Exception e)
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           at System.Web.UI.Page.ProcessRequest()
           at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
           at System.Web.UI.Page.ProcessRequest(HttpContext context)
           at ASP.default_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\flickrdemo\15752207\c63ea96c\App_Web__8yxn9sb.0.cs:line 0
           at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)


  • Wanted to know in detail how shared libraries work vis-a-vis static libraries

    - by goldenmean
    I am working on creating and linking a shared library (.so). While working with them, many questions popped up for which I could not find satisfying answers when I searched, so I'm putting them here. The questions I have about shared libraries are:

    1.) How is a shared library different from a static library? What are the key differences in the way they are created and executed?

    2.) In the case of a shared library, at what point are the addresses assigned from which a particular function in the library will be loaded and run? Who assigns those functions their load/run addresses?

    3.) Will an application linked against a shared library be slower in execution compared to one linked with a static library?

    4.) Will the application executable size differ in the two cases?

    5.) Can one do source-level debugging by stepping into functions defined inside a shared library? Is anything extra needed to make these functions visible to the application?

    6.) What are the pros and cons of using either kind of library?

    Thanks. -AD
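
    As background for question 1, a minimal sketch of how the two kinds are typically built and linked on Linux with gcc (file names are illustrative):

        # static: an archive of object files, copied into the executable at link time
        gcc -c foo.c -o foo.o
        ar rcs libfoo.a foo.o
        gcc main.c -L. -lfoo -o app_static

        # shared: position-independent code, resolved by the dynamic loader at run time
        gcc -c -fPIC foo.c -o foo.o
        gcc -shared foo.o -o libfoo.so
        gcc main.c -L. -lfoo -o app_shared
        # at run time the loader must be able to find libfoo.so
        # (system library path, rpath, or LD_LIBRARY_PATH)

    This also hints at the answer to question 2: for a shared library, the final addresses are chosen by the dynamic loader at load time, not baked in at link time.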


  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using:

        Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here), with mutation rates tested between 1% and 25%
        Selection: Roulette Wheel Selection
        Fitness function: 1 / distance of tour
        Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations
        Stop condition: 2500 generations

    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover, my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA is performing worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima, as it has no way of exploring once it finds a local max?
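
    For reference when debugging, a compact sketch of Ordered Crossover (OX) as usually described, with tours as permutations of 0..n-1. A subtle bug here - a duplicated or dropped city - silently produces invalid or poor tours that mutation then has to repair, which would match the symptoms above, so it can be worth diffing an existing implementation against something like this:

        import java.util.Arrays;
        import java.util.Random;

        final class OrderedCrossover {
            // child keeps p1[lo..hi] in place, then takes the remaining cities
            // in the order they appear in p2, starting just after the cut
            static int[] crossover(int[] p1, int[] p2, Random rnd) {
                int n = p1.length;
                int a = rnd.nextInt(n), b = rnd.nextInt(n);
                int lo = Math.min(a, b), hi = Math.max(a, b);
                int[] child = new int[n];
                Arrays.fill(child, -1);
                boolean[] used = new boolean[n];        // cities are 0..n-1
                for (int k = lo; k <= hi; k++) {
                    child[k] = p1[k];
                    used[p1[k]] = true;
                }
                int write = (hi + 1) % n;
                for (int k = 0; k < n; k++) {           // walk p2 from after the cut
                    int city = p2[(hi + 1 + k) % n];
                    if (!used[city]) {
                        child[write] = city;
                        write = (write + 1) % n;
                    }
                }
                return child;
            }
        }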


  • WCF, IIS 6.0: (413) Request Entity Too Large

    - by Andrew Kalashnikov
    I've got an annoying problem. I have a WCF service (basicHttpBinding with Transport security, HTTPS). This service implements a contract that consists of two methods: LoadData and GetData. GetData works OK - my client receives a package of ~2 MB without problems; everything works correctly. But when I try to load data via

        bool LoadData(Stream data);  // signature of the method

    I get (413) Request Entity Too Large. Stack trace:

        Server stack trace:
           at System.ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding)
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
           at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
           at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)

    I tried this: http://blogs.msdn.com/jiruss/archive/2007/04/13/http-413-request-entity-too-large-can-t-upload-large-files-using-iis6.aspx. But it doesn't work! My server is Windows Server 2003 with IIS 6.0. Please help.
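
    For reference, the server-side limits that usually govern this error (a sketch with illustrative values, not the poster's actual configuration; with HTTPS on IIS 6.0 the uploadReadAheadSize metabase setting discussed in the linked article is often involved as well):

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="largeUploads"
                       maxReceivedMessageSize="67108864"
                       maxBufferSize="67108864">
                <readerQuotas maxArrayLength="67108864" />
              </binding>
            </basicHttpBinding>
          </bindings>
        </system.serviceModel>
        <system.web>
          <!-- ASP.NET's own request limit, in KB -->
          <httpRuntime maxRequestLength="65536" />
        </system.web>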


  • How to count term frequency for a set of documents?

    - by ManBugra
    I have a Lucene index with the following documents:

        doc1 := { caldari, jita, shield, planet }
        doc2 := { gallente, dodixie, armor, planet }
        doc3 := { amarr, laser, armor, planet }
        doc4 := { minmatar, rens, space }
        doc5 := { jove, space, secret, planet }

    So these 5 documents use 14 different terms:

        [ caldari, jita, shield, planet, gallente, dodixie, armor, amarr, laser, minmatar, rens, jove, space, secret ]

    The frequency of each term:

        [ 1, 1, 1, 4, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1 ]

    For easy reading:

        [ caldari:1, jita:1, shield:1, planet:4, gallente:1, dodixie:1, armor:2, amarr:1, laser:1, minmatar:1, rens:1, jove:1, space:2, secret:1 ]

    What I want to know is: how do I obtain the term frequency vector for a set of documents? For example:

        Set<Document> docs := [ doc2, doc3 ]
        termFrequencies = magicFunction(docs);
        System.out.println(termFrequencies);

    would result in the output:

        [ caldari:0, jita:0, shield:0, planet:2, gallente:1, dodixie:1, armor:2, amarr:1, laser:1, minmatar:0, rens:0, jove:0, space:0, secret:0 ]

    Removing all zeros:

        [ planet:2, gallente:1, dodixie:1, armor:2, amarr:1, laser:1 ]

    Notice that the result vector contains only the term frequencies of the set of documents, NOT the overall frequencies of the whole index! The term 'planet' is present 4 times in the whole index, but the source set of documents only contains it 2 times. A naive implementation would be to just iterate over all documents in the docs set, create a map, and count each term. But I need a solution that would also work with a document set size of 100,000 or 500,000. Is there a feature in Lucene I can use to obtain this term vector? If there is no such feature, what would a data structure look like that someone could create at index time to obtain such a term vector easily and quickly? I'm not that much of a Lucene expert, so I'm sorry if the solution is obvious or trivial.
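
    A sketch against the Lucene 2.x/3.x API that was current when this was asked; it assumes term vectors were enabled on the field at index time (Field.TermVector.YES), since getTermFreqVector returns null otherwise. It still iterates the document set, but reads precomputed per-document vectors instead of re-counting tokens:

        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;
        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.TermFreqVector;

        static Map<String, Integer> termFrequencies(IndexReader reader, int[] docIds,
                                                    String field) throws IOException {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (int docId : docIds) {
                TermFreqVector tfv = reader.getTermFreqVector(docId, field);
                if (tfv == null) continue;          // term vectors not stored for this doc
                String[] terms = tfv.getTerms();
                int[] freqs = tfv.getTermFrequencies();
                for (int i = 0; i < terms.length; i++) {
                    Integer old = counts.get(terms[i]);
                    counts.put(terms[i], old == null ? freqs[i] : old + freqs[i]);
                }
            }
            return counts;
        }

    For the 100,000+ document case this is still linear in the number of documents, so a precomputed structure (for example, per-chunk aggregate counts maintained at index time) may be needed if that proves too slow.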


  • Silverlight 3: long-running WCF call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters, like this:

        public static T GetServiceClient<T>(string serviceURL)
        {
            BasicHttpBinding binding = new BasicHttpBinding(
                Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase)
                    ? BasicHttpSecurityMode.Transport
                    : BasicHttpSecurityMode.None);
            binding.MaxReceivedMessageSize = int.MaxValue;
            binding.MaxBufferSize = int.MaxValue;
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL) });
        }

    The service implements Windows security. Calls were returning as expected until the result set increased to several thousand rows, at which point HTTP 401.1 errors were received. The service's httpBinding defines closeTimeout, openTimeout, receiveTimeout, and sendTimeout of 10 minutes. If I limit the size of the result set, the call succeeds. Additional observations from Fiddler: when Method2 is modified to return a smaller result set (and avoid the problem), control initialization consists of 4 calls:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 200 (1.25 seconds)

    When Method2 is configured to return the larger result set, we get:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 401.1 (7.5 seconds)
        Service1/Method2 -- result: 401.1 (15 ms)
        Service1/Method2 -- result: 401.1 (7.5 seconds)

