Search Results

Search found 7179 results on 288 pages for 'slow logon'.

  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds well under low load, but under high load (around 100 users/sec) it hits 100% CPU and then slows down for lack of CPU.

    Problem: Profiling the application gives me the time taken by each function, and this time increases under high load, but the time consumed may be due to complex calculation or to waiting for the CPU. So how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption should improve the response time. I might have written extremely efficient code and simply need to add more CPU power, or I might have some wasteful code grabbing the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to load-test my webapp; with 100 users it gives a throughput of 2 requests/sec, and an average time of 36 seconds over 100 requests vs. 1.25 seconds for a single request.

    More info on the configuration:

    - Nginx + uWSGI with 4 workers
    - No database used; the responses come from a REST API
    - On the first hit the REST API response gets cached, so it makes no difference afterwards
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many organisations for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
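
    One way to tell "computing" apart from "waiting for CPU" -- a minimal sketch using only the standard library, separate from the asker's Django setup -- is to compare process CPU time with wall-clock time around the code in question; a large gap means the process spent the interval waiting rather than computing:

        import time

        def cpu_vs_wall(fn, *args, **kwargs):
            """Run fn once and report CPU seconds vs. wall-clock seconds."""
            wall_start = time.perf_counter()
            cpu_start = time.process_time()  # CPU time used by this process only
            result = fn(*args, **kwargs)
            cpu_used = time.process_time() - cpu_start
            wall_used = time.perf_counter() - wall_start
            print("wall: %.4fs, cpu: %.4fs" % (wall_used, cpu_used))
            return result

        cpu_vs_wall(sum, range(10_000_000))  # CPU-bound example: the two numbers stay close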

  • printing out dictionaries

    - by kyril
    I have a rather specific question: I want to print out characters at specific positions using the \033[ escape syntax. That is what the code below should do (the dict cells has the same keys as coords, but with either '*' or '-' as value):

        coords = {'x'+str(x)+'y'+str(y): (x, y) for x, y in itertools.product(range(60), range(20))}
        for key, value in coords.items():
            char = cells[key]
            x, y = value
            HORIZ = str(x)
            VERT = str(y)
            char = str(char)
            print('\033[' + VERT + ';' + HORIZ + 'f' + char)

    However, I noticed that if I put this into an infinite while loop, it does not always print the same characters at the same positions. There are only slight changes, but it deletes some and puts them back in after some loops. I already tried it with lists, and there it seems to behave just fine, so I tend to think it has something to do with the dict, but I cannot figure out what it could be. You can see the problem in a console here: SharedConsole. I am happy for every tip on this matter.

    On a related topic: after the printing, some changes should be made to the values of the cells dict, but for reasons unknown to me only the first two rules are executed and the rest are ignored. The rules should test how many neighbours (which is in population) are around the cell and apply the according rule. In my implementation I get some kind of weird tumour growth, which should not happen: if there are more than three neighbours around, the cell should die (see FreakingTumor):

        if cells_copy[coord] == '-':
            if population == 3:
                cells[coord] = '*'
        if cells_copy[coord] == '*':
            if population > 3:
                cells[coord] = '-'
            elif population <= 1:
                cells[coord] = '-'
            elif population == 2 or 3:
                cells[coord] = '*'

    I checked the population variable several times, so I am quite sure that is not the matter. I am sorry for the slow consoles. Thanks in advance! Kyril
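
    For reference, the escape sequence used here is the ANSI "cursor position" control: ESC[<row>;<col>f moves the cursor (1-based, row first) before the character is written. A minimal self-contained sketch of just that mechanism, independent of the asker's cells grid, with a single flush per frame:

        import sys

        def put_char(row, col, char):
            # ANSI HVP sequence: ESC [ row ; col f  (1-based coordinates)
            sys.stdout.write("\033[%d;%df%s" % (row, col, char))

        # draw a small diagonal, then flush once so the terminal
        # receives the whole frame in one write
        for i in range(1, 6):
            put_char(i, i, "*")
        sys.stdout.flush()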

  • [PHP/MySQL] How to create text diff web app

    - by Adam Kiss
    Hello,

    Idea: I would like to create a little app for myself to store ideas (the thing is -- I want it to do it MY WAY).

    Database: I'm thinking of going simple:

        id      -- unique id of revision in database
        text_id -- identification number of text
        rev_id  -- number of revision
        flags   -- various purposes (explained below)
        title   -- self-explanatory
        desc    -- description
        text    -- self-explanatory

    About flags: if I add, for example, the flag rb;65, then instead of storing the whole text I am saying that whenever the latest revision is requested, I go back to the DB and fetch revision 65.

    Questions: Is this setup the best? Is it better to store the diff or the whole text (I know, disk space is cheap...)? Does that revision flag make sense (wouldn't it be better to just copy the text -- more disk space, but less DB and PHP processing)?

    PHP: I'm thinking I'll go with PEAR here. Although the main point is open-edit-save, the possibility to view revisions can't be that hard to program and can be a life-saver in certain situations (a good idea got deleted, the wrong version was saved, etc.). However, I've never used PEAR in a long-term or full-project relationship, and brief encounters in my previous experience left a rather bad impression -- as I remember, it was too difficult to implement and slow and humongous to play with -- so I don't know if there's anything better.

    Why? Although there are bazillions of various time/project/idea management tools, every one of them lacks something for me, whether it's sharing with users, syncing across PCs, time-tracking or project management. And I believe this text diff web app will see internal use alongside various other tools later. So if you know any good project management app with a nice UI and support for text-heavy usage, just let me know, so I'll save my time for something better than reinventing the wheel.
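
    On the store-the-diff option: as a language-agnostic illustration (sketched in Python rather than the asker's PHP), a revision can be stored as just the delta between two versions and either side rebuilt on demand -- which is the trade-off being asked about:

        import difflib

        old = "first draft of the idea\nsecond line\n".splitlines(keepends=True)
        new = "first draft of the idea, revised\nsecond line\n".splitlines(keepends=True)

        # store only the delta between revisions...
        delta = list(difflib.ndiff(old, new))

        # ...and rebuild either side from it on demand (2 = the "new" side)
        rebuilt_new = "".join(difflib.restore(delta, 2))
        assert rebuilt_new == "".join(new)

    The cost is exactly the one the question raises: reading the latest revision means replaying deltas (more processing), while storing full texts costs disk but makes every read a single fetch.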

  • Optimising Ruby regexp -- lots of match groups

    - by Farcaller
    I'm working on a Ruby-based lexer. To improve performance, I joined all the tokens' regexps into one big regexp with named match groups. The resulting regexp looks like:

        /\A(?<__anonymous_-1038694222803470993>(?-mix:\n+))|\A(?<__anonymous_-1394418499721420065>(?-mix:\/\/[\A\n]*))|\A(?<__anonymous_3077187815313752157>(?-mix:include\s+"[\A"]+"))|\A(?<LET>(?-mix:let\s))|\A(?<IN>(?-mix:in\s))|\A(?<CLASS>(?-mix:class\s))|\A(?<DEF>(?-mix:def\s))|\A(?<DEFM>(?-mix:defm\s))|\A(?<MULTICLASS>(?-mix:multiclass\s))|\A(?<FUNCNAME>(?-mix:![a-zA-Z_][a-zA-Z0-9_]*))|\A(?<ID>(?-mix:[a-zA-Z_][a-zA-Z0-9_]*))|\A(?<STRING>(?-mix:"[\A"]*"))|\A(?<NUMBER>(?-mix:[0-9]+))/

    I match it against my string, producing a MatchData in which exactly one token is parsed:

        bigregex =~ "\n ... garbage"
        puts $~.inspect

    which outputs:

        #<MatchData "\n" __anonymous_-1038694222803470993:"\n" __anonymous_-1394418499721420065:nil __anonymous_3077187815313752157:nil LET:nil IN:nil CLASS:nil DEF:nil DEFM:nil MULTICLASS:nil FUNCNAME:nil ID:nil STRING:nil NUMBER:nil>

    So the regexp actually matched the "\n" part. Now I need to figure out which match group it belongs to (it's clearly visible from the #inspect output that it's __anonymous_-1038694222803470993, but I need to get it programmatically). I could not find any option other than iterating over #names:

        m.names.each do |n|
          if m[n]
            type = n.to_sym
            resolved_type = (n.start_with?('__anonymous_') ? nil : type)
            val = m[n]
            break
          end
        end

    which checks each group until it finds the one that matched. The problem is that this is slow: I spend about 10% of the time in that loop, and another 8% grabbing @input[@pos..-1] to make sure \A matches the start of the string as expected (I do not discard consumed input, I just advance @pos within it). You can check the full code at the GH repo. Any ideas on how to make it at least a bit faster? Is there an easier way to find the "successful" match group?
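
    For what it's worth, this exact lookup exists in Python's re module -- noted purely as a cross-language comparison, since the asker's lexer is Ruby: Match.lastgroup names the matched group directly, with no iteration over the group names:

        import re

        # a toy version of the combined token regexp: one named group per token
        lexer = re.compile(r"(?P<NUMBER>[0-9]+)|(?P<ID>[a-zA-Z_]\w*)|(?P<WS>\s+)")

        m = lexer.match("hello 123")
        print(m.lastgroup, repr(m.group()))   # ID 'hello'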

  • jQuery pagination plugin doesn't quite work right for me...

    - by Pandiya Chendur
    My page has the following jQuery statements:

        <script type="text/javascript">
            $(document).ready(function() {
                getRecordspage(1, 5);
            });
        </script>

    It works fine, but how do I add a callback function to my pagination (callback://)?

        function getRecordspage(curPage, pagSize) {
            $.ajax({
                type: "POST",
                url: "Default.aspx/GetRecords",
                data: "{'currentPage':" + curPage + ",'pagesize':" + pagSize + "}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(jsonObj) {
                    var strarr = jsonObj.d.split('##');
                    var jsob = jQuery.parseJSON(strarr[0]);
                    var divs = '';
                    $.each(jsob.Table, function(i, employee) {
                        divs += '<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name +
                                '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name +
                                '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis +
                                '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary +
                                '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address +
                                '</span></div>';
                    });
                    $("#ResultsDiv").append(divs).show('slow');
                    $(".pager").pagination(strarr[1], { // the callback should call this same method
                        current_page: curPage - 1,
                        items_per_page: '5',
                        num_display_entries: '5',
                        next_text: 'Next',
                        prev_text: 'Prev',
                        num_edge_entries: '1'
                    });
                }
            });
        }

    On the initial page load I get all the divs and the generated page numbers, but when I click the next page number nothing happens, because the callback is not configured. The callback has to call this same method with whichever page anchor was clicked... Any suggestions on how to get this done?

  • Find all A^x in a given range

    - by Austin Henley
    I need to find all monomials of the form A^X that, when evaluated, fall within a range from m to n. It is safe to say that the base A is greater than 1, the power X is greater than 2, and only integers need to be used. For example, in the range 50 to 100, the solutions would be:

        2^6
        3^4
        4^3

    My first attempt was to brute-force all combinations of A and X that make "sense". However, this becomes too slow when used for very large numbers over a big range, since these solutions feed into much more intensive processing. Here is the code:

        def monoSearch(min, max):
            base = 2
            power = 3
            while 1:
                while base**power < max:
                    if base**power > min:
                        print "Found " + repr(base) + "^" + repr(power) + " = " + repr(base**power)
                    power = power + 1
                base = base + 1
                power = 3
                if base**power > max:
                    break

    I could remove one base**power by saving the value in a temporary variable, but I don't think that would have a drastic effect. I also wondered whether using logarithms would be better, or whether there is a closed-form expression for this. I am open to any optimisations or alternatives for finding the solutions.
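
    On the logarithm idea: since A^X <= n implies X <= log(n)/log(A), the exponent range for each base can be computed directly instead of probed one power at a time -- a rough sketch of that approach (the bounds handling is an assumption, not taken from the asker's code):

        import math

        def mono_search(lo, hi):
            """Yield (base, power, value) with lo <= base**power <= hi and power >= 3."""
            base = 2
            while base ** 3 <= hi:
                # largest exponent worth trying for this base
                max_power = int(math.log(hi, base)) + 1  # +1 guards against float rounding
                for power in range(3, max_power + 1):
                    value = base ** power
                    if lo <= value <= hi:
                        yield base, power, value
                base += 1

        for b, p, v in mono_search(50, 100):
            print("%d^%d = %d" % (b, p, v))   # prints 2^6 = 64, 3^4 = 81, 4^3 = 64... etc.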

  • Need help optimizing this Django aggregate query

    - by Chris Lawlor
    I have the following model:

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # more fields

    which represents a plugin that can be downloaded from my site. To track downloads, I have:

        class Download(models.Model):
            plugin = models.ForeignKey(Plugin)
            timestamp = models.DateTimeField(auto_now=True)

    So to build a view showing plugins sorted by downloads, I have the following query:

        # pbd is plugins by download
        pbd = Plugin.objects.annotate(dl_total=Count('download')).order_by('-dl_total')

    This works, but is very slow. With only 1,000 plugins, the average response is 3.6-3.9 seconds (devserver with a local PostgreSQL db), whereas a similar view with a much simpler query (sorting by plugin release date) takes 160 ms or so.

    I'm looking for suggestions on how to optimize this query. I'd really prefer that the query return Plugin objects (as opposed to using values), since I'm sharing the same template with the other views (plugins by rating, plugins by release date, etc.), so the template expects Plugin objects -- plus I'm not sure how I would get things like the absolute_url without a reference to the plugin object.

    Or is my whole approach doomed to failure? Is there a better way to track downloads? I ultimately want to give users some nice download statistics for the plugins they've uploaded -- like downloads per day/week/month. Will I have to calculate and cache download counts at some point?

    EDIT: In my test dataset there are somewhere between 10 and 20 Download instances per Plugin -- in production I would expect this number to be much higher for many of the plugins.
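
    One common alternative to counting related rows on every request -- sketched here as an assumption about how the models could be reshaped, not as the asker's code -- is to denormalize a counter onto Plugin and bump it atomically with an F() expression on each download:

        from django.db import models
        from django.db.models import F

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # denormalized counter, updated on every download
            download_total = models.PositiveIntegerField(default=0, db_index=True)

        def record_download(plugin_id):
            # atomic at the SQL level: UPDATE ... SET download_total = download_total + 1
            Plugin.objects.filter(pk=plugin_id).update(download_total=F('download_total') + 1)

        # the listing view then becomes a plain indexed ORDER BY
        plugins_by_downloads = Plugin.objects.order_by('-download_total')

    The per-row Download records can still be kept for the day/week/month statistics; the counter only serves the hot listing query.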

  • SQL Server insert performance with and without primary key

    - by Eric
    Summary: I have a table populated via the following:

        insert into the_table (...)
        select ... from some_other_table

    Running the above query with no primary key on the_table is ~15x faster than running it with a primary key, and I don't understand why.

    The details: I think this is best explained through code examples. I have a table:

        create table the_table
        (
            a int not null,
            b smallint not null,
            c tinyint not null
        );

    If I add a primary key, this insert query is terribly slow:

        alter table the_table
        add constraint PK_the_table primary key(a, b);

        -- Inserting ~880,000 rows
        insert into the_table (a, b, c)
        select a, b, c from some_view;

    Without the primary key, the same insert query is about 15x faster. However, after populating the_table without a primary key, I can add the primary key constraint, and that only takes a few seconds. This one really makes no sense to me.

    More info:

    - The estimated execution plan shows 0% of total query time spent on the clustered index insert.
    - SQL Server 2008 R2 Developer edition, 10.50.1600.

    Any ideas?

  • Read a large amount of data from a file in Java

    - by Crozin
    Hello. I've got a text file that contains 1 000 002 numbers in the following format:

        123 456
        1 2 3 4 5 6 .... 999999 100000

    Now I need to read that data and allocate the very first two numbers to int variables, and all the rest (1 000 000 numbers) to an int[] array. It's not a hard task, but it's horribly slow. My first attempt was java.util.Scanner:

        Scanner stdin = new Scanner(new File("./path"));
        int n = stdin.nextInt();
        int t = stdin.nextInt();

        int[] array = new int[n];
        for (int i = 0; i < n; i++) {
            array[i] = stdin.nextInt();
        }

    It works as expected, but it takes about 7500 ms to execute, and I need to fetch that data in at most several hundred milliseconds. Then I tried java.io.BufferedReader: using BufferedReader.readLine() and String.split() I got the same results in about 1700 ms, but that's still too much. How can I read this amount of data in less than 1 second? The final result should be equal to:

        int n = 123;
        int t = 456;
        int[] array = { 1, 2, 3, 4, ..., 999999, 100000 };

  • Read header data from files on remote server

    - by rejeep
    Hi! I'm working on a project right now where I need to read header data from files on remote servers. I'm talking about many large files, so I can't read whole files, only the header data I need.

    The only solution I have is to mount the remote server with FUSE and then read the headers from the files as if they were on my local computer. I've tried it and it works, but it has some drawbacks, especially with FTP:

    - Really slow (FTP compared to SSH with curlftpfs: from the same server, 90 files were read over SSH in 18 seconds, versus 10 files over FTP in 39 seconds).
    - Not dependable: sometimes the mountpoint will not be unmounted.
    - If the server is active and a passive mount is made, the mountpoint and its parent folder get locked for about 3 minutes.
    - It times out even while a data transfer is going on (I guess this is the FTP protocol and not curlftpfs).

    FUSE is a solution, but I don't like it very much because I don't feel I can trust it. So my question is basically whether there are any other solutions to the problem. The language is preferably Ruby, but any other will work if Ruby does not support the solution. Thanks!
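
    Without FUSE, the same "header only" read can be done directly over SFTP, which supports partial reads. A sketch in Python with the paramiko library (the asker prefers Ruby, where net-sftp offers the equivalent; the host, credentials and header size below are placeholders, not from the question):

        import paramiko

        HOST, USER, PASSWORD = "example.com", "user", "secret"  # placeholders
        HEADER_SIZE = 512  # however many leading bytes count as "the header"

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(HOST, username=USER, password=PASSWORD)

        sftp = client.open_sftp()
        with sftp.open("/remote/path/bigfile.dat", "rb") as f:
            header = f.read(HEADER_SIZE)  # only these bytes cross the wire
        client.close()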

  • jQuery: how to apply an effect to a child element

    - by Tristan
    Hello, instead of rewriting the same function, I want to optimise my code. The markup:

        <div class="header">
            <h3>How to use the widget</h3>
            <span id="idwidget"></span>
        </div>
        <div class="content" id="widget">

    The JS:

        <script type="text/javascript">
            $(document).ready(function() {
                var showText = "Show";
                var hideText = "Hide";
                $("#idwidget").before("<a href='#' class='button' id='toggle_link'>" + showText + "</a>");
                $('#widget').hide();
                $('a#toggle_link').click(function() {
                    if ($('a#toggle_link').text() == showText) {
                        $('a#toggle_link').text(hideText);
                    } else {
                        $('a#toggle_link').text(showText);
                    }
                    $('#widget').toggle('slow');
                    return false;
                });
            });
        </script>

    This works, but only for the div with id widget and the button with id idwidget. On the same page I also have:

        <div class="header">
            <h3>How to eat APPLES</h3>
            <span id="IDsomethingelse"></span>
        </div>
        <div class="content" id="somethingelse">

    and I want the code to be compatible with that too. I heard about the children attribute -- do you have an idea how to do that, please? Thank you.

  • Effective simulation of a compound Poisson process in Matlab

    - by Henrik
    I need to simulate a huge number of compound Poisson processes in Matlab on a very fine grid, so I am looking to do it as effectively as possible. I need to run many simulations on the same random numbers but with changing parameters, so it is practical to draw the uniforms and normals beforehand, even though that means drawing far more than I will probably need; it won't matter much, because it only needs to be done once, compared to the on-the-order-of 500*nrepl times the actual compound process is generated.

    My method is the following. Let T be how long I need to simulate and N the number of grid points; then my grid is:

        t = linspace(1, T, N);

    Let nrepl be the number of processes I need; then I simulate:

        P = poissrnd(lambda, nrepl, 1); % Number of jumps for each replication
        U = (T-1)*rand(10000, nrepl) + 1; % Set of uniforms on (1,T) for jump times
        N = randn(10000, nrepl); % Set of normals for jump sizes

    Then for replication j:

        Poiss = P(j); % Jumps for this replication
        Uni = U(1:Poiss, j); % Jump times
        Norm = mu + sigma*N(1:Poiss, j); % Jump sizes

    And this, I guess, is where I need your advice. I use this one-liner, but it seems very slow:

        CPP_norm = sum(bsxfun(@times, bsxfun(@gt, t, Uni), Norm), 1);

    For each jump it creates a series of the same length as t that is 0 until the jump and 1 after; multiplying by the jump size gives a grid that is zero until the jump arrives and the jump size afterwards, and finally adding all of these produces the entire jump process on the grid. How can this be done more effectively? Thank you very much.
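
    For comparison, here is the same construction in NumPy (not the asker's Matlab, and with made-up parameter values): sorting the jump times and taking a cumulative sum of the sizes avoids building the full jumps-by-gridpoints indicator matrix that the bsxfun one-liner materialises:

        import numpy as np

        rng = np.random.default_rng(0)
        T, N, lam, mu, sigma = 10.0, 1000, 5.0, 0.0, 1.0   # made-up parameters

        t = np.linspace(1, T, N)

        n_jumps = rng.poisson(lam)
        jump_times = rng.uniform(1, T, n_jumps)
        jump_sizes = mu + sigma * rng.standard_normal(n_jumps)

        # sort jumps by time and accumulate their sizes...
        order = np.argsort(jump_times)
        cum_sizes = np.concatenate(([0.0], np.cumsum(jump_sizes[order])))

        # ...then each grid point just looks up how many jumps precede it
        idx = np.searchsorted(jump_times[order], t, side="right")
        cpp = cum_sizes[idx]   # the compound Poisson path on the grid

    The same sort/cumsum/lookup (sort, cumsum, histc or discretize) structure translates back to Matlab.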

  • PHP shell_exec() - Run directly, or perform a cron (bash/php) and include MySQL layer?

    - by Jimbo
    Sorry if the title is vague -- I wasn't quite sure how to word it!

    What I'm doing: I'm running a Linux command to output data into a variable, parsing the data, and outputting it as an array. The array values are displayed on a page using PHP, and this PHP page output is requested via AJAX every 10 seconds, so in effect the data is retrieved and displayed/updated every 10 seconds. There could be as many as 10,000 characters being parsed on every request, although it is usually far fewer.

    Alternative idea: I want to know if there is a better* alternative for retrieving this data every 10 seconds, as multiple users (<10) will have this command executed automatically for them. A cronjob running on the server could execute either bash or PHP (which is faster?) to grab the data and store it in a MySQL database. Any AJAX calls to the PHP output would then return values from the MySQL database rather than executing server code directly every 10 seconds.

    Why? I know there are security concerns with running execs directly from PHP, and (I hope this isn't micro-optimisation) I'm worried about CPU usage on the server. The server runs a Sempron processor. Yes, they do still exist. Having the command execute only when a user is on the page (idea #1) means the server isn't running code that doesn't need to be run. However, is this slow and insecure?

    Just in case the type of Linux command is of assistance in determining its efficiency:

        shell_exec("transmission-remote $host:$port --auth $username:$password -l");

    I'm hoping that there are differences in efficiency and level of security between the two methods outlined above, and that this isn't just micro-micro-optimisation. If there are alternative methods that are better*, I'd love to learn about them! :)

  • Please help me with a PowerShell script which rearranges paths

    - by Hamish Grubijan
    Hi, I have both Sybase and MSFT SQL Servers installed. There are times when Sybase interferes with MS SQL because they have some overlapping commands. So I need two scripts:

    A) When run, script A backs up the current PATH, grabs all path entries that contain sybase or SYBASE or SyBASE (you get the point), and moves them all to the very end of the PATH while preserving their order.

    B) When run, script B restores the PATH from the backup.

    Both scripts should affect the PATH immediately. So if a.bat calls patha.ps1 and pathb.ps1, it looks like so:

        @REM Old path here
        call patha.ps1
        @REM At this point the effective path should be different.
        call pathb.ps1
        @REM Effective old path again

    Please let me know if this does not make sense. I am not sure if the call command is the best one to use. I have never used PowerShell before. I can try to formulate the same thing in Python (I know S.O. users tend to ask "What have you tried so far?"). Well, at this point I am VERY slow at writing anything in the PowerShell language. Please help.
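
    The reordering itself is simple to express, independent of how the shell picks it up; a sketch in Python, since the asker mentions it as a fallback (note that environment changes here only affect the current process and its children -- persisting the backup between script runs is left out):

        import os

        def sybase_last(path_value):
            """Move every PATH entry containing 'sybase' (any case) to the end,
            preserving the relative order within both groups."""
            entries = path_value.split(os.pathsep)
            normal = [e for e in entries if "sybase" not in e.lower()]
            sybase = [e for e in entries if "sybase" in e.lower()]
            return os.pathsep.join(normal + sybase)

        backup = os.environ["PATH"]               # script A: back up...
        os.environ["PATH"] = sybase_last(backup)  # ...and reorder
        # script B would simply restore: os.environ["PATH"] = backup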

  • Quickly generate junk data of a certain size in JavaScript

    - by user1357607
    I am writing an upload speed test in JavaScript. I am using jQuery (and Ajax) to send chunks of data to a server in order to time how long it takes to get a response. This should, in theory, give an estimate of the upload speed. To cater for users' different bandwidths, I sequentially upload larger and larger amounts of junk data until a threshold duration is reached.

    Currently I generate the junk data using the following function; however, it is very slow when generating megabytes of data:

        function generate_random_data(size) {
            var chars = "abcdefghijklmnopqrstuvwxyz";
            var random_data = "";
            for (var i = 0; i < size; i++) {
                var random_num = Math.floor(Math.random() * chars.length);
                random_data = random_data + chars.substring(random_num, random_num + 1);
            }
            return random_data;
        }

    Really all I am doing is generating a chunk of bytes to send to the server; this is just the only way I could find to do it in JavaScript. Any help would be appreciated.

  • Cannot locate record in Delphi ADO query

    - by Danatela
    I can't locate any record in a TADOQuery using the PK. First, I tried the standard Locate method:

        PPUQuery.Locate('ID', SpPlansQuery['PPONREC'], []);

    It always returns False, but a manual search (walking the whole query and matching ID against the given PPONREC, which is really slow) finds the desired row. I tried using loPartialKey and switched the query's CursorLocation to clUseServer, but it didn't help. Next, I tried to filter my PPUQuery:

        PPUQuery.Filter := 'ID = ' + VarToStr(SpPlansQuery['PPONREC']);
        PPUQuery.Filtered := True;
        PPUQuery.First;

    But after that, PPUQuery.Eof is True and PPUQuery.RecordCount equals 0. The underlying database is Oracle 9; ID is of type INTEGER and is the PK of table TPORDER_CMK. PPUQuery.SQL is:

        SELECT tp.*, la.*, lm.*, ld.*, ld1.*, to_cmk.*
        FROM ppu_plan.tporder_cmk tp
        JOIN PPU_PLAN.LARTICLES la ON TP.ARTICLE = LA.ID
        JOIN PPU_PLAN.LMATERIAL lm ON TP.MATERIAL = lm.id
        JOIN PPU_PLAN.LCADEP ld ON TP.CADEP = LD.ID
        JOIN PPU_PLAN.LCADEP ld1 ON TP.PRODUCER = LD1.ID
        JOIN PPU_PLAN.TORDER_CMK to_cmk ON TP.order_id = TO_cmk.ID
        WHERE TP.PLAN_ID = :pplan_id

    What should I try next, and how can I solve this problem?

  • Search for values in nested array

    - by dardub
    I have an array as follows:

        array(3) {
          ["operator"] => array(2) {
            ["qty"] => int(2)
            ["id"] => int(251)
          }
          ["accessory209"] => array(2) {
            ["qty"] => int(1)
            ["id"] => int(209)
          }
          ["accessory211"] => array(2) {
            ["qty"] => int(1)
            ["id"] => int(211)
          }
        }

    I'm trying to find a way to verify that an id value exists within the array and return a bool, ideally quickly and without writing a loop. Using the in_array function did not work, and I also read that it is quite slow. In the PHP manual someone recommended using array_flip() and then isset(), but I can't get that to work for a 2-D array. Doing something like

        if ($array['accessory']['id'] == 211)

    would also work for me, but I would need to match all keys containing "accessory" -- and I'm not sure how to do that. Anyway, I'm spinning in circles and could use some help. This seems like it should be easy. Thanks.
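
    For what it's worth, the shape of the check is the same in any language -- here is a Python analogue (not the asker's PHP), which short-circuits on the first hit but is still a loop under the hood, as any search through a nested structure must be:

        data = {
            "operator":     {"qty": 2, "id": 251},
            "accessory209": {"qty": 1, "id": 209},
            "accessory211": {"qty": 1, "id": 211},
        }

        def has_id(items, wanted):
            # stops at the first entry whose id matches
            return any(entry["id"] == wanted for entry in items.values())

        print(has_id(data, 211))  # True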

  • Optimizing landing pages

    - by Oleg Shaldybin
    In my current project (Rails 2.3) we have a collection of 1.2 million keywords, each associated with a landing page, which is effectively a search results page for that keyword. Each of those pages is pretty complicated, so it can take a long time to generate (up to 2 seconds under moderate load, and even longer during traffic spikes, on current hardware). The problem is that 99.9% of visits to those pages are new visits (via search engines), so caching on the first visit doesn't help much: that visit will still be slow, and the next visit could be weeks later.

    I'd really like to make those pages faster, but I don't have too many ideas on how to do it. A couple of things that come to mind:

    - Build a cache for all keywords beforehand (with a very long TTL, a month or so). However, building and maintaining this cache can be a real pain, and the search results on the page might be outdated or even no longer accessible.
    - Given the volatile nature of this data, don't try to cache anything at all, and just try to scale out to keep up with traffic.

    I'd really appreciate any feedback on this problem.

  • Hide form if JavaScript is disabled

    - by Kero
    I need to handle the case where the user has disabled JavaScript -- in the browser, in a firewall, or anywhere else -- so that the form is never shown to them. I have done lots of searching and found solutions, but unfortunately not the right one.

    Using a style inside a noscript tag -- this one can be broken by removing the style:

        <noscript>
            <style type="text/css">
                .HideClass { display: none; }
            </style>
        </noscript>

    The code above works just fine, but there are lots of problems with the noscript tag, as described here. Besides that, I don't want to redirect the user with a noscript tag either, because I can quickly stop the page from loading to break this meta tag, or disable meta tags in IE:

        <meta http-equiv="refresh" content="0; URL=Frm_JavaScriptDisable.aspx" />

    Another way is to redirect the user with JavaScript, but this will only work for, let's say, 99% of users; it isn't a lovely approach and will slow down the website:

        window.location = "http://www.location.com/page.aspx";

    Are there any other ideas or suggestions for working securely with JavaScript -- preventing the user from entering the website or seeing my form unless JavaScript is enabled?

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying data with hierarchical filtering. I have a few ideas about how to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient.

    As an example, imagine that a user is searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search somewhere specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are:

    1. Keep a note of each record's children in the database (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with SELECT * FROM Jobs WHERE Category IN (the area's children field).
    2. Use a recursive function to find all results that have a relation to the selected area.

    The problems I see with these are:

    1. Holding this data in the DB means having to keep track of all changes to the structure.
    2. Recursion is slow and inefficient.

    Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB. A sketch of the expand-the-children idea follows below.
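
    To make approach 1 concrete -- a sketch in Python rather than the asker's C#, with an in-memory stand-in for the Areas table -- expanding an area into itself plus all nested sub-areas is an iterative walk, so no recursion depth is involved, and the result feeds a single WHERE Category IN (...) query:

        def descendants(area_id, children):
            """Expand an area to itself plus all nested sub-areas.

            `children` maps area id -> list of direct child ids (an
            illustrative stand-in for the real Areas table)."""
            ids = [area_id]
            stack = list(children.get(area_id, []))
            while stack:                  # iterative walk: no recursion limit
                current = stack.pop()
                ids.append(current)
                stack.extend(children.get(current, []))
            return ids

        # Scotland(1) > West Central(2) > Glasgow(3), Etc(4); North East(5) > ...
        children = {1: [2, 5], 2: [3, 4], 5: [6, 7]}
        print(descendants(1, children))   # [1, 5, 7, 6, 2, 4, 3]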

  • Hibernate Relationship Mapping/Speed up batch inserts

    - by manyxcxi
    I have 5 MySQL InnoDB tables: Test, InputInvoice, InputLine, OutputInvoice, OutputLine, and each is mapped and functioning in Hibernate. I have played with using StatelessSession vs. Session and with the JDBC batch size. I have removed any generator classes to let MySQL handle the id generation -- but it is still performing quite slowly.

    Each of those tables is represented by a Java class and mapped in Hibernate accordingly. Currently, when it comes time to write the data out, I loop through the objects and do a session.save(Object), or session.insert(Object) if I'm using StatelessSession. I also do a flush and clear (when using Session) when my line count reaches the maximum JDBC batch size (50).

    Would it be faster if I had these in a 'parent' class that held the objects and did a session.save(master) instead of saving each one? If I had them in a master/container class, how would I map that in Hibernate to reflect the relationship? The container class wouldn't actually be a table of its own, but a relationship based entirely on two indexes, run_id (int) and line (int).

    Another direction would be: how do I get Hibernate to do a multi-row insert?

  • SQL Server 2005 FREETEXT() Performance Issue

    - by Zenon
    I have a query with about 6-7 joined tables and a FREETEXT() predicate on 6 columns of the base table in the WHERE clause. This query worked fine (in under 2 seconds) for the last year and has remained practically unchanged (I tried old versions and the problem persists).

    Today, all of a sudden, the same query takes around 1-1.5 minutes. After checking the execution plan in SQL Server 2005, rebuilding the FULLTEXT index of that table, reorganising the FULLTEXT index, creating the index from scratch, restarting the SQL Server service and restarting the whole server, I don't know what else to try. I have temporarily switched the query to use LIKE instead until I figure this out (which takes about 6 seconds now).

    When I compare the FREETEXT query with the LIKE query in the query performance analyser, the former has 350 times as many reads (4,921,261 vs. 13,943) and 20 times the CPU usage (38,937 vs. 1,938) of the latter. So it really is the FREETEXT predicate that causes it to be so slow.

    Has anyone got any ideas on what the reason might be, or further tests I could do?

  • How to find intersecting rows when the condition depends on some columns in one table

    - by user3695637
    Table subscribe:

        subscriber | subscribeto
        -----------+------------
                 1 | 5
                 1 | 6
                 1 | 7
                 1 | 8
                 1 | 9
                 1 | 10
                 2 | 5
                 2 | 6
                 2 | 7

    There are two users, with ids 1 and 2. They subscribe to various users, and I inserted these data into the subscribe table. The subscriber column indicates who the subscriber is, and the subscribeto column indicates whom they've subscribed to. From the above table we can conclude that user id=1 subscribed to 6 users and user id=2 subscribed to 3 users.

    I want to find the mutual subscriptions (like Facebook's mutual friends):

        user 1 subscribes to users 5, 6, 7, 8, 9, 10
        user 2 subscribes to users 5, 6, 7

    So the mutual subscriptions of users 1 and 2 are: 5, 6, 7.

    I'm trying to create a SQL statement for this. Here is the user table for my statement, although I think we can use the subscribe table alone -- I just can't figure out how.

    Table user:

        userid
        ------
        1
        2
        3
        ...

    SQL:

        select * from user
        where (select count(1) from subscribe
               where subscriber = '1' and subscribeto = user.userid)
          and (select count(1) from subscribe
               where subscriber = '2' and subscribeto = user.userid);

    This SQL works correctly, but it is very slow for thousands of rows. Please provide better SQL for me. Thanks.

  • Best way to store this data?

    - by Malfist
    I have just been assigned to renovate an old website, and I get to move it from an archaic system to Drupal. The only problem is that it's a real-estate system and a lot of data is stored. Currently all the information is stored in a single table: an id represents the house, and everything else is key/value pairs. There are 243 possible keys per estate and 23,840 estates in the system. As you can imagine, the system is slow and difficult to query.

    I don't think a table with 243 columns would be a very good idea -- probably worse than the current situation. I've done some investigating, and here's what I've found out (the percentages are how often each key is populated):

    - Missing data does not indicate a 0 value; data is merged from two unique sources/formats, and some guessing is involved. I have no control over the source of the data.
    - There are 4 keys that are common to all estates; all their values look like things that are commonly searched for and could be indexed.
    - There are 10 keys in the [90-100)% range. 8 of these hold information like who's selling the estate and its address; the other two seem to belong with the range below.
    - There are 80 keys in the [80-90)% range. This range mostly lists room types and how many of each the house has (e.g. bedrooms_possible, bathrooms, family_room_3rd, etc.), plus some minor information like school districts and one or two more pieces of address data.
    - The 179 keys in the [0-80)% range include all sorts of miscellaneous information about the estate.

    My best idea was a hybrid approach: create a table that stores the important, common information, and keep a smaller key/value table. How would you store this information?

  • Generating all unique crossword puzzle grids

    - by heydenberk
    I want to generate all unique crossword puzzle grids of a certain grid size (4x4 is a good size). All possible puzzles, including non-unique ones, are represented by a binary string with the length of the grid area (16 in the case of 4x4), so all possible 4x4 puzzles are represented by the binary forms of all numbers in the range 0 to 2^16.

    Generating these is easy, but I'm curious whether anyone has a good solution for programmatically eliminating invalid and duplicate cases. For example, all puzzles with a single column or single row are functionally identical, so 7 of those 8 cases can be eliminated. Also, according to crossword puzzle conventions, all squares must be contiguous.

    I've had success removing all duplicate structures, but my solution took several minutes to execute and probably was not ideal. I'm at something of a loss for how to detect contiguity, so if anyone has ideas on this it'd be much appreciated.

    I'd prefer solutions in Python, but write in whichever language you prefer. If anyone wants, I can post my Python code for generating all grids and removing duplicates, slow as it may be.
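
    On the contiguity check: one standard approach -- offered here as a sketch, not the asker's code -- is a flood fill: start from any open square, expand to orthogonal neighbours, and the grid is contiguous exactly when the fill reaches every open square:

        def is_contiguous(grid, size=4):
            """grid: sequence of 0/1 of length size*size, 1 = open square."""
            open_cells = {i for i, v in enumerate(grid) if v}
            if not open_cells:
                return False
            stack = [next(iter(open_cells))]   # flood fill from any open square
            seen = set()
            while stack:
                cell = stack.pop()
                if cell in seen:
                    continue
                seen.add(cell)
                r, c = divmod(cell, size)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < size and 0 <= nc < size:
                        n = nr * size + nc
                        if n in open_cells and n not in seen:
                            stack.append(n)
            return seen == open_cells

        # every 4x4 candidate is just the bits of a 16-bit number
        n = 0b1111100010001000   # right column plus bottom row: an L shape
        grid = [(n >> i) & 1 for i in range(16)]
        print(is_contiguous(grid))   # True

    Run over all 2^16 candidates, this filter is cheap enough to apply before the slower duplicate-elimination step.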
