Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • Generic dataset handling library

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one that can, most certainly, be used on any kind of dataset containing a mixture of categorical (A, B, C, ...), continuous (1.2, 3.881, ...) and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of the data I mean which variable is in which place and its name/type. And this is where I need some enlightenment: I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like that. The solutions I can think of are:

    - Create a configuration file in XML or similar with a prespecified format.
    - Pass the information during initialization of the module.
    - Use the first row of the data as headers and try to guess types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.

  • PHP Object References in Frameworks

    - by bigstylee
    Before I dive into the discussion part, a quick question: is there a method to determine if a variable is a reference to another variable/object? For example:

        $foo = 'Hello World';
        $bar = &$foo;
        echo (is_reference($bar) ? 'Is reference' : 'Is original');

    I have been using PHP5 for a few years now (personal use only) and I would say I am moderately versed in the topic of object-oriented implementation. However, the concept of a Model-View-Controller framework is fairly new to me. I have looked at a number of tutorials and at some of the open-source frameworks (mainly CodeIgniter) to get a better understanding of how everything fits together, and I am starting to appreciate the real benefits of using this type of structure. I am used to implementing object referencing with the following technique:

        class Foo {
            public $var = 'Hello World!';
        }

        class Bar {
            public function __construct() {
                global $Foo;
                echo $Foo->var;
            }
        }

        $Foo = new Foo;
        $Bar = new Bar;

    I was surprised to see that CodeIgniter and Yii pass references to objects that can be accessed via the following method:

        $this->load->view('argument');

    The immediate advantage I can see is a lot less code, and it is more user-friendly. But I do wonder: is it more efficient, as these frameworks are presumably optimised? Or is it simply to make the code more user-friendly? This was an interesting article: Do not use PHP references.

  • How to effectively color pixels in a BufferedImage?

    - by Ed Taylor
    I'm using the following piece of code to iterate over all pixels in an image and draw a red 1x1 square over the pixels that are within a certain RGB tolerance. I guess there is a more efficient way to do this? Any ideas appreciated. (bi is a BufferedImage and g2 is a Graphics2D with its color set to Color.RED.)

        Color targetColor = new Color(selectedRGB);
        for (int x = 0; x < bi.getWidth(); x++) {
            for (int y = 0; y < bi.getHeight(); y++) {
                Color pixelColor = new Color(bi.getRGB(x, y));
                if (withinTolerance(pixelColor, targetColor)) {
                    g2.drawRect(x, y, 1, 1);
                }
            }
        }

        private boolean withinTolerance(Color pixelColor, Color targetColor) {
            int pixelRed = pixelColor.getRed();
            int pixelGreen = pixelColor.getGreen();
            int pixelBlue = pixelColor.getBlue();
            int targetRed = targetColor.getRed();
            int targetGreen = targetColor.getGreen();
            int targetBlue = targetColor.getBlue();
            return (((pixelRed >= targetRed - tolRed) && (pixelRed <= targetRed + tolRed))
                 && ((pixelGreen >= targetGreen - tolGreen) && (pixelGreen <= targetGreen + tolGreen))
                 && ((pixelBlue >= targetBlue - tolBlue) && (pixelBlue <= targetBlue + tolBlue)));
        }

  • ajax html vs xml/json responses - performance or other reasons

    - by pedalpete
    I've got a fairly AJAX-heavy site, and some 3k HTML-formatted pages are inserted into the DOM from AJAX requests. What I have been doing is taking the HTML responses and just inserting the whole thing using jQuery. My other option is to output XML (or possibly JSON) and then parse the document and insert it into the page. I've noticed that most larger sites seem to do things the JSON/XML way; Google Mail, for instance, returns XML rather than formatted HTML. Is this due to performance, or is there another reason to use XML/JSON versus just retrieving HTML? From a JavaScript standpoint, it would seem that injecting HTML directly is simplest. In jQuery I just do this:

        jQuery.ajax({
            type: "POST",
            url: "getpage.php",
            data: requestData,
            success: function(response) {
                jQuery('div#putItHear').html(response);
            }
        });

    With an XML/JSON response I would have to do:

        jQuery.ajax({
            type: "POST",
            url: "getpage.php",
            data: requestData,
            success: function(xml) {
                $("message", xml).each(function(id) {
                    message = $("message", xml).get(id);
                    $("#messagewindow").prepend("" + $("author", message).text() +
                        ": " + $("text", message).text() + "");
                });
            }
        });

    Clearly not as efficient from a code standpoint, and I can't expect that it gives better browser performance, so why do things the second way?

  • Point data structure for a sketching application

    - by bebraw
    I am currently developing a little sketching application based on the HTML5 Canvas element. There is one particular problem I haven't yet managed to find a proper solution for. The idea is that the user will be able to manipulate existing stroke data (points) quite freely. This includes pushing point data around (i.e. a magnet tool) and manipulating it at whim otherwise (i.e. altering color). Note that the current brush engine is able to shade by taking existing stroke data into account. It's a quick and dirty solution, as it just iterates the points in the current stroke and checks them against a distance rule.

    Now the problem is how to do this in a nice manner. It is extremely important to be able to perform efficient queries that return all points within a given canvas coordinate and radius. Other features, such as space usage, should be secondary to this. I don't mind doing some extra processing between strokes while the user is not painting. Any pointers are welcome. :)
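
    One classic fit for exactly this kind of fixed-radius query is a uniform grid (a "spatial hash"): bucket every point by its cell, and a query only has to look at the cells overlapping the query circle. A minimal sketch, written in Python to stay language-neutral - the cell size and the (x, y, payload) point format are assumptions, not part of the question:

        from collections import defaultdict
        from math import floor

        class PointGrid:
            """Uniform grid over the canvas; buckets points by cell for radius queries."""

            def __init__(self, cell_size):
                self.cell = cell_size
                self.buckets = defaultdict(list)

            def _key(self, x, y):
                return (floor(x / self.cell), floor(y / self.cell))

            def insert(self, x, y, payload=None):
                self.buckets[self._key(x, y)].append((x, y, payload))

            def query(self, x, y, radius):
                """Return all points within `radius` of (x, y)."""
                r2 = radius * radius
                kx, ky = self._key(x, y)
                reach = int(radius // self.cell) + 1  # cells the circle can touch
                hits = []
                for i in range(kx - reach, kx + reach + 1):
                    for j in range(ky - reach, ky + reach + 1):
                        for (px, py, payload) in self.buckets.get((i, j), ()):
                            if (px - x) ** 2 + (py - y) ** 2 <= r2:
                                hits.append((px, py, payload))
                return hits

    Rebuilding the grid between strokes (while the user is not painting) fits the stated budget, and the structure ports directly to JavaScript for the Canvas side.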

  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI, if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail-recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing.

        Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>();
        ...
        private string ExpandDirectoryKey(Database database, string directoryKey)
        {
            // check for terminating condition
            string fullPath;
            if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath))
            {
                return fullPath;
            }
            // inductive step
            Record record = ExecuteQuery(database, "SELECT DefaultDir, Directory_Parent FROM Directory WHERE Directory.Directory='{0}'", directoryKey);
            // null check
            string directoryName = record.GetString("DefaultDir");
            string parentDirectoryKey = record.GetString("Directory_Parent");
            return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName);
        }

    This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short-circuit whenever possible, but that requires me to make a call that stores the output of the recursive ExpandDirectoryKey call in the dictionary. I realize that I also have a Path.Combine call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value, so that I could call it like this at the end of the function above:

        return MemoizeHelper(
            m_directoryKeyToFullPathDictionary,
            Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey)),
            directoryName);

    But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!
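
    For what it's worth, memoization tends to defeat tail recursion however it is phrased, because something must run after the recursive call returns. One escape is to unroll the recursion into an explicit walk up the parent chain, then fill the memo on the way back down. A language-neutral sketch of that idea in Python, where query_directory and the root sentinel are assumptions standing in for the MSI Directory-table lookup:

        memo = {"TARGETDIR": "C:\\"}  # assumed root entry

        def expand_directory_key(key):
            # Walk upward, collecting keys whose full path is still unknown
            # (deepest first), until we hit something already memoised.
            path = []
            while key not in memo:
                name, parent = query_directory(key)  # assumed: -> (DefaultDir, Directory_Parent)
                path.append((key, name))
                key = parent
            # Now walk back down, building and memoising every prefix.
            full = memo[key]
            for k, name in reversed(path):
                full = full.rstrip("\\") + "\\" + name
                memo[k] = full
            return full

    The loop is constant-stack regardless of nesting depth, which is what the tail call was meant to buy, and every key on the path lands in the memo, not just the outermost one.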

  • Python - Checking for membership inside nested dict

    - by victorhooi
    heya,

    This is a follow-up question to this one: http://stackoverflow.com/questions/2901422/python-dictreader-skipping-rows-with-missing-columns Turns out I was being silly and using the wrong ID field. I'm using Python 3.x here.

    I have a dict of employees, indexed by a string, "directory_id". Each value is a nested dict with employee attributes (phone number, surname etc.). One of these values is a secondary ID, say "internal_id", and another is their manager, call it "manager_internal_id". The "internal_id" field is non-mandatory, and not every employee has one. (I've simplified the fields a little, both to make it easier to read and also for privacy/compliance reasons.)

    The issue here is that we index (key) each employee by their directory_id, but when we look up their manager, we need to find managers by their "internal_id". Before, when employees.keys() was a list of internal_ids, I was using a membership check on this. Now, the last part of my if statement won't work, since the internal_ids are part of the dict values instead of the key itself.

        def lookup_supervisor(manager_internal_id, employees):
            if manager_internal_id is not None and manager_internal_id != "" and manager_internal_id in employees.keys():
                return (employees[manager_internal_id]['mail'],
                        employees[manager_internal_id]['givenName'],
                        employees[manager_internal_id]['sn'])
            else:
                return ('Supervisor Not Found', 'Supervisor Not Found', 'Supervisor Not Found')

    So the first question is: how do I check whether the manager_internal_id is present in the dict's values? I've tried substituting employees.keys() with employees.values(); that didn't work. Also, I'm hoping for something a little more efficient - I'm not sure if there's a way to get a subset of the values, specifically all the entries for employees[directory_id]['internal_id']. Hopefully there's some Pythonic way of doing this without a massive heap of nested for/if loops.

    My second question is: how do I then cleanly return the required employee attributes (mail, givenName, surname etc.)? My for loop is iterating over each employee and calling lookup_supervisor. I'm feeling a bit stupid/stumped here.

        def tidy_data(employees):
            for directory_id, data in employees.items():
                # We really shouldn't be passing employees back and forth like this - hmm, classes?
                data['SupervisorEmail'], data['SupervisorFirstName'], data['SupervisorSurname'] = \
                    lookup_supervisor(data['manager_internal_id'], employees)

    Thanks in advance =),
    Victor
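
    One common answer, sketched under the field names used in the question: build a reverse index from internal_id to the employee record once, and every supervisor lookup becomes a single dict access instead of a scan over the values.

        def build_manager_index(employees):
            """Map each employee's internal_id to their record (the field is non-mandatory)."""
            by_internal_id = {}
            for data in employees.values():
                internal_id = data.get('internal_id')
                if internal_id:
                    by_internal_id[internal_id] = data
            return by_internal_id

        def lookup_supervisor(manager_internal_id, by_internal_id):
            # .get() handles None and "" as well as genuinely missing managers.
            manager = by_internal_id.get(manager_internal_id)
            if manager is None:
                return ('Supervisor Not Found',) * 3
            return (manager['mail'], manager['givenName'], manager['sn'])

        def tidy_data(employees):
            index = build_manager_index(employees)  # built once, O(n)
            for data in employees.values():
                (data['SupervisorEmail'],
                 data['SupervisorFirstName'],
                 data['SupervisorSurname']) = lookup_supervisor(data['manager_internal_id'], index)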

  • javascript: waiting for an iframe page to load before writing to it (but not from the page that's tr

    - by Bill Dawes
    Apologies if this has been answered elsewhere, but I haven't been able to find it referenced. (Probably because nobody else would want to do such a daft thing, I admit.)

    So, I have a page with three iframes in it. An event on one triggers a JavaScript function which loads new pages into the other two iframes, ['topright'] and ['bottomright']. However, JavaScript in the page that is being loaded into the 'topright' iframe then needs to send information to elements in the 'bottomright' iframe:

        window.frames['bottomright'].document.subform.ID_client = client;

    etc. But this will only work if the page has fully loaded into the bottomright frame. So what would be the most efficient way for the code in the 'topright' iframe to check and ensure that the form element in the bottomright frame is actually available to write to, before it does write to it? Bear in mind that the page load has NOT been triggered from the topright frame, so I can't simply use an onLoad function.

    (I know this probably sounds like a hideously tortuous route for getting data from one page to another, but that's another story. The client is always right, etc... :-))

  • Return call from ggplot object

    - by aL3xa
    I've been using ggplot2 for a while now, and I can't find a way to get the formula back from a ggplot object. Though I can get basic info with summary(<ggplot_object>), in order to get the complete formula I was usually combing up and down through the .Rhistory file. And this becomes frustrating when you experiment with new graphs, especially when the code gets a bit lengthy... so searching through the history file isn't a very convenient way of doing this... Is there a more efficient way? Just an illustration:

        p <- qplot(data = mtcars, x = factor(cyl), geom = "bar", fill = factor(cyl)) +
            scale_fill_manual(name = "Cylinders", value = c("firebrick3", "gold2", "chartreuse3")) +
            stat_bin(aes(label = ..count..), vjust = -0.2, geom = "text", position = "identity") +
            xlab("# of cylinders") +
            ylab("Frequency") +
            opts(title = "Barplot: # of cylinders")

    I can get some basic info with summary:

        > summary(p)
        data: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb [32x11]
        mapping: fill = factor(cyl), x = factor(cyl)
        scales: fill
        faceting: facet_grid(. ~ ., FALSE)
        -----------------------------------
        geom_bar: stat_bin: position_stack: (width = NULL, height = NULL)
        mapping: label = ..count..
        geom_text: vjust = -0.2
        stat_bin: width = 0.9, drop = TRUE, right = TRUE
        position_identity: (width = NULL, height = NULL)

    But I want to get the code I typed in to get the graph. I reckon I'm missing something essential here... it seems impossible that there's no way to get the call from a ggplot object!

  • Building asynchronous cache pattern with JSP

    - by merweirdo
    I have a JSP that will take some 8 minutes to render. The code logic itself cannot be made more efficient (it will be updated often, and by basically a pointy-haired boss). I tried wrapping it with a caching layer like this:

        <%@ taglib uri="/WEB-INF/classes/oscache.tld" prefix="oscache" %>
        <oscache:cache time="60">
            <div class="pagecontent">
                ..... my logic
            </div>
        </oscache:cache>

    This is nice until the 60 seconds are over; the next request after that blocks until the 8 minutes of rendering are done with again. What I would need is a way to build a pattern something like this:

    - If there is no version of the dynamic content in the cache, run the actual logic (and populate the cache for subsequent requests).
    - If there is a non-expired version of the dynamic content in the cache, serve the output of the JSP logic from the cache.
    - If there is an expired version of the dynamic content in the cache, serve the output of the JSP logic still from the cache AND run the JSP logic in the background so that the cache gets updated transparently - avoiding the user having to wait for 8 minutes.

    I found out that EHCache at least might be able to do some asynchronous cache updating, but sadly it did not seem to apply to the JSP tags... Also, I have to take in 10-20 parameters for the actual logic of the JSP, and some of them should be used as a key for caching. A code example and/or pointers would be greatly appreciated (see the sketch below). I frankly do not care if the solution provided is extremely ugly; I just want a simple 5-minute cache with asynchronous updates, taking some parameters into account as a key.
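
    The pattern described by those three bullet points is usually called "stale-while-revalidate". A minimal, language-neutral sketch of its moving parts (in Python rather than JSP, purely for brevity; building the cache key from those 10-20 request parameters is assumed to happen in the caller):

        import threading
        import time

        class StaleWhileRevalidateCache:
            def __init__(self, ttl, compute):
                self.ttl = ttl            # seconds before an entry counts as stale
                self.compute = compute    # the slow "render the page" function: key -> value
                self.entries = {}         # key -> (value, timestamp)
                self.refreshing = set()   # keys currently being recomputed
                self.lock = threading.Lock()

            def get(self, key):
                with self.lock:
                    entry = self.entries.get(key)
                    stale = entry is not None and time.time() - entry[1] >= self.ttl
                    start_refresh = stale and key not in self.refreshing
                    if start_refresh:
                        self.refreshing.add(key)
                if entry is None:
                    # Cold cache: only the very first caller has to wait.
                    value = self.compute(key)
                    with self.lock:
                        self.entries[key] = (value, time.time())
                    return value
                if start_refresh:
                    # Serve the stale value now; recompute in the background.
                    threading.Thread(target=self._refresh, args=(key,), daemon=True).start()
                return entry[0]

            def _refresh(self, key):
                value = self.compute(key)
                with self.lock:
                    self.entries[key] = (value, time.time())
                    self.refreshing.discard(key)

        # Hypothetical usage:
        # cache = StaleWhileRevalidateCache(ttl=300, compute=render_report)
        # html = cache.get((param1, param2))  # tuple of request parameters as the key

    In the JSP world the same shape would live in a servlet filter or a custom tag: hash the relevant parameters into the key, serve whatever is cached, and hand the 8-minute render to a background worker when the entry is past its TTL.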

  • What's the difference between these LINQ queries?

    - by SnAzBaZ
    I use LINQ to SQL as my DAL; I then have a project called DB which acts as my BLL. Various applications then access the BLL to read/write data from the SQL database. I have these methods in my BLL for one particular table:

        public IEnumerable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            return from s in db.SystemSalesTaxLists
                   select s;
        }

        public SystemSalesTaxList Get_SystemSalesTaxList(string strSalesTaxID)
        {
            return Get_SystemSalesTaxList().Where(s => s.SalesTaxID == strSalesTaxID).FirstOrDefault();
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            return Get_SystemSalesTaxList().Where(s => s.ZipCode == strZipCode).FirstOrDefault();
        }

    All pretty straightforward, I thought. Get_SystemSalesTaxListByZipCode is always returning null, though, even when it is given a ZIP code that exists in that table. If I write the method like this, it returns the row I want:

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            var salesTax = from s in db.SystemSalesTaxLists
                           where s.ZipCode == strZipCode
                           select s;
            return salesTax.FirstOrDefault();
        }

    Why does the other method not return the same, when the query should be identical? Note that the overloaded Get_SystemSalesTaxList(string strSalesTaxID) returns a record just fine when I give it a valid SalesTaxID. Is there a more efficient way to write these "helper" type classes? Thanks!

  • Wrappers of primitive types in ArrayList vs arrays

    - by ismail marmoush
    Hi,

    In "Core Java 1" I've read:

        CAUTION: An ArrayList is far less efficient than an int[] array because each
        value is separately wrapped inside an object. You would only want to use this
        construct for small collections when programmer convenience is more important
        than efficiency.

    But in my software I've already used ArrayList instead of normal arrays due to some requirements. The software is supposed to have high performance, and after I read the quoted text I started to panic! One thing I can change is changing double variables to Double so as to prevent autoboxing, though I don't know if that is worth it or not. Take the next sample algorithm:

        public void multiply(final double val) {
            final int rows = getSize1();
            final int cols = getSize2();
            for (int i = 0; i < rows; i++) {
                for (int j = 0; j < cols; j++) {
                    this.get(i).set(j, this.get(i).get(j) * val);
                }
            }
        }

    My question is: does changing double to Double make a difference, or is that a micro-optimization that won't affect anything? Keep in mind I might be using large matrices. Second, should I consider redesigning the whole program again?

  • Calling DI Container directly in method code (MVC Actions)

    - by fearofawhackplanet
    I'm playing with DI (using Unity). I've learned how to do constructor and property injection. I have a static container exposed through a property in my Global.asax file (the MvcApplication class). I have a need for a number of different objects in my controller. It doesn't seem right to inject these through the constructor, partly because of the high quantity of them, and partly because they are only needed in some action methods. The question is: is there anything wrong with just calling my container directly from within the action methods?

        public ActionResult Foo()
        {
            IBar bar = MvcApplication.Container.Resolve<IBar>();
            // ... Bar uses a default constructor. I'm not actually doing any
            // injection here, I'm just telling my container to give me Bar
            // when I ask for IBar, so I can hide the existence of the concrete
            // Bar from my Controller.
        }

    This seems the simplest and most efficient way of doing things, but I've never seen an example done this way. Is there anything wrong with it? Am I missing the concept in some way?

  • Writing a VM - well-formed bytecode?

    - by David Titarenco
    Hi, I'm writing a virtual machine in C just for fun. Lame, I know, but luckily I'm on SO so hopefully no one will make fun :) I wrote a really quick'n'dirty VM that reads lines of (my own) ASM and does stuff. Right now, I only have 3 instructions: add, jmp, end. All is well, and it's actually pretty cool being able to feed it lines like:

        write_line(&prog[1], "jmp", regA, regB, 0);

    and then running the program:

        while (machine.code_pointer <= BOUNDS && DONE != true) {
            run_line(&prog[machine.cp]);
        }

    I'm using an opcode lookup table (which may not be efficient, but it's elegant) in C, and everything seems to be working OK.

    My question is more of a "best practices" question, but I do think there's a correct answer to it. I'm making the VM able to read binary files (storing bytes in an unsigned char[]) and execute bytecode. My question is: is it the VM's job to make sure the bytecode is well-formed, or is it just the compiler's job to make sure the binary file it spits out is well-formed? I only ask because of what would happen if someone edited a binary file and screwed stuff up (deleted arbitrary parts of it, etc.). Clearly, the program would be buggy and probably not functional. Is this even the VM's problem? I'm sure that people much smarter than me have figured out solutions to these problems, I'm just curious what they are!
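
    A common middle ground between "trust the compiler" and "check on every dispatch" is a load-time verifier: the VM validates the whole image once, before execution starts, so the interpreter loop can stay fast. A sketch of the idea (shown in Python for brevity; the fixed 4-byte encoding, register count and opcode numbers are assumptions for illustration, not details from the question):

        OPCODES = {0x01: "add", 0x02: "jmp", 0x03: "end"}
        NUM_REGS = 4
        INSN_SIZE = 4  # opcode, operand a, operand b, unused

        def verify(code):
            """Reject images that are truncated, use unknown opcodes,
            reference registers out of range, or jump off the end."""
            if len(code) % INSN_SIZE != 0:
                return False  # truncated or padded image
            n_insns = len(code) // INSN_SIZE
            for pc in range(n_insns):
                op, a, b, _ = code[pc * INSN_SIZE:(pc + 1) * INSN_SIZE]
                if op not in OPCODES:
                    return False
                if OPCODES[op] == "add" and (a >= NUM_REGS or b >= NUM_REGS):
                    return False
                if OPCODES[op] == "jmp" and a >= n_insns:
                    return False  # jump target outside the program
            return True

    Anything the verifier cannot prove statically (say, a computed jump) still needs a cheap runtime check, but everything it can prove is checked exactly once.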

  • Help! How do I get the total number of rows from my SQL Server paging procedure?

    - by The_AlienCoder
    OK, I have a table in my SQL Server database that stores comments. My desire is to be able to page through the records using [Back], [Next], page number and [Last] buttons in my data list. I figured the most efficient way was a stored procedure that only returns a certain number of rows within a particular range. Here is what I came up with:

        @PageIndex INT,
        @PageSize INT,
        @postid int
        AS
        SET NOCOUNT ON
        begin
            WITH tmp AS (
                SELECT comments.*,
                       ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
                FROM comments
                WHERE (comments.postid = @postid))
            SELECT tmp.*
            FROM tmp
            WHERE Row between (@PageIndex - 1) * @PageSize + 1 and @PageIndex * @PageSize
        end
        RETURN

    Now everything works fine, and I have been able to implement the [Next] and [Back] buttons in my data list pager. But I also need the total number of all comments (not just those on the current page) so that I can implement my page numbers and the [Last] button. In other words, I want to return the total number of rows matched by my first select statement, i.e.:

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
            FROM comments
            WHERE (comments.postid = @postid))
        set @TotalRows = @@rowcount

    @@rowcount doesn't work and raises an error. I also can't get count(*) to work. Is there another way to get the total number of rows, or is my approach doomed?

  • How to check if two records have a self-referencing relation?

    - by Machine
    Consider the following schema with users and their colleagues (friends):

        User:
          columns:
            user_id:
              name: user_id as userId
              type: integer(8)
              unsigned: 1
              primary: true
              autoincrement: true
            first_name:
              name: first_name as firstName
              type: string(45)
              notnull: true
            last_name:
              name: last_name as lastName
              type: string(45)
              notnull: true
            email:
              type: string(45)
              notnull: true
              unique: true
          relations:
            Collegues:
              class: User
              local: invitor
              foreign: invitee
              refClass: CollegueStatus
              equal: true
              onDelete: CASCADE
              onUpdate: CASCADE

    Join table:

        CollegueStatus:
          columns:
            invitor:
              type: integer(8)
              unsigned: 1
              primary: true
            invitee:
              type: integer(8)
              unsigned: 1
              primary: true
            status:
              type: enum(8)
              values: [pending, accepted, denied]
              default: pending
              notnull: true

    Now, let's say I have two records: one for the user making an HTTP request (the logged-in user), and one record for a user he wants to send a message to. I want to check whether these users are colleagues.

    Questions:

    - Does Doctrine have any pre-built functionality to check if two records with a self-referencing relation are related?
    - If not, how would you write a method to check this?
    - Where would you put said method? (In the User class, the UserTable class, etc.)

    I could probably do something like this:

        public function areCollegues(User $user1, User $user2)
        {
            // Ensure we load collegues if $user1 was fetched with DQL that
            // doesn't load this relation
            $collegues = $user1->get('Collegues');

            $areCollegues = false;
            foreach ($collegues as $collegue) {
                if ($collegue['userId'] === $user2['userId']) {
                    $areCollegues = true;
                    break;
                }
            }
            return $areCollegues;
        }

    But this looks neither efficient nor pretty. I just feel that this should already be solved for self-referencing relations to be nice to use.

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index - the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the queries on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week.

    My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:

    - Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
    - Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table. Occasionally drop older entries. Only copying entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
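
    A sketch of the second approach with the same-timestamp gap closed (shown in Python with the MySQLdb driver; the table name original_table, the column name ts, and the connection details are assumptions). It assumes aux_table carries a UNIQUE key mirroring the original table's primary key, so the boundary rows that get re-copied are silently de-duplicated by INSERT IGNORE:

        import MySQLdb  # assumed driver; any DB-API connector works the same way

        conn = MySQLdb.connect(db="mydb")  # connection details assumed
        cur = conn.cursor()

        # Re-copy from the newest timestamp we already have (>=, not >), so
        # same-timestamp stragglers inserted after the last run are picked up.
        cur.execute("SELECT COALESCE(MAX(ts), NOW() - INTERVAL 7 DAY) FROM aux_table")
        (last_ts,) = cur.fetchone()
        cur.execute(
            "INSERT IGNORE INTO aux_table "
            "SELECT * FROM original_table WHERE ts >= %s",
            (last_ts,),
        )

        # Trim the window back to one week.
        cur.execute("DELETE FROM aux_table WHERE ts < NOW() - INTERVAL 7 DAY")
        conn.commit()

    The incremental insert and the trailing delete both run along the timestamp index, so neither touches more than the edges of the window.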

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to the staging server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to the production server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost?

    I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.

  • How can I get this week's dates in Perl?

    - by ABach
    I have the following loop to calculate the dates of the current week and print them out. It works, but I am swimming in the amount of date/time possibilities in Perl and want to get your opinion on whether there is a better way. Here's the code I've written:

        #!/usr/bin/env perl
        use warnings;
        use strict;

        use DateTime;

        # Calculate numeric value of today and the
        # target day (Monday = 1, Sunday = 7); the
        # target, in this case, is Monday, since that's
        # when I want the week to start
        my $today_dt = DateTime->now;
        my $today  = $today_dt->day_of_week;
        my $target = 1;

        # Create DateTime copies to act as the "bookends"
        # for the date range
        my ($start, $end) = ($today_dt->clone(), $today_dt->clone());

        if ($today == $target) {
            # If today is the target, "start" is already set;
            # we simply need to set the end date
            $end->add( days => 6 );
        }
        else {
            # Otherwise, we calculate the Monday preceding today
            # and the Sunday following today
            my $delta = ($target - $today + 7) % 7;
            $start->add( days => $delta - 7 );
            $end->add( days => $delta - 1 );
        }

        # I clone the DateTime object again because, for some reason,
        # I'm wary of using $start directly...
        my $cur_date = $start->clone();

        while ($cur_date <= $end) {
            my $date_ymd = $cur_date->ymd;
            print "$date_ymd\n";
            $cur_date->add( days => 1 );
        }

    As mentioned, this works, but is it the quickest or most efficient? I'm guessing that quickness and efficiency may not necessarily go together, but your feedback is very appreciated.

  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria:

    - 2 characters long
    - 1st character is always numeric (0-9)
    - 2nd character is (0-9) but also includes "X"
    - Values need to be inserted into a table on a database

    The program would execute:

        insert into table (code) values ('01');
        insert into table (code) values ('02');
        insert into table (code) values ('03');
        insert into table (code) values ('04');
        insert into table (code) values ('05');
        insert into table (code) values ('06');
        insert into table (code) values ('07');
        insert into table (code) values ('08');
        insert into table (code) values ('09');
        insert into table (code) values ('0X');

    And so on, until all 110 values were inserted. My code (just to accomplish it, not to minimize it or make it efficient) was:

        use strict;
        use DBI;

        my ($db1, $sql, $sth, %dbattr);
        %dbattr = (ChopBlanks => 1, RaiseError => 0);
        $db1 = DBI->connect('DBI:mysql:', '', '', \%dbattr);

        my @code;
        for (0..9) {
            $code[0] = $_;
            for (0..9) {
                $code[1] = $_;
                insert(@code);
            }
            insert($code[0], "X");
        }

        sub insert {
            my $skip = 0;
            foreach (@_) {
                if ($skip == 0) {
                    $sql = "insert into table (code) values ('" . $_[0] . $_[1] . "');";
                    $sth = $db1->prepare($sql);
                    $sth->execute();
                    $skip++;
                }
                else {
                    $skip--;
                }
            }
        }

        exit;

    I'm just interested to see a really succinct & precise version of this logic.

  • one table is shared between several websites

    - by sami
    I have a static table that's shared by several websites. By static, I mean that the data is read but never updated by the websites. Currently, all websites are served from the same server, but that may change. I want to minimize the need for creating/maintaining this table for each of the websites, so I thought about turning it into an XML file that's stored in a shared library that all websites have access to. The problem is that I use an ORM and use foreign key constraints to ensure the integrity of the ids used from that table, so by moving that table out of the MySQL database into an XML file, will this affect the integrity of the ids coming from that table? My table looks like this:

        <table name="entry">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="title" type="VARCHAR" size="500" required="true" />
        </table>

    and I use it as a foreign key in other tables:

        <table name="refer">
            <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
            <column name="linkto" type="INTEGER"/>
            <foreign-key foreignTable="entry">
                <reference local="linkto" foreign="id" />
            </foreign-key>
        </table>

    So I'm wondering: if I remove that table from the database, is there a way to retain that referential integrity? And of course, are there any other efficient ways to do the same thing? I just don't want to have to repeat that table for several websites.

  • jQuery image fader slow in IE6 & 7

    - by Jamie
    Hi guys, I'm using the following jQuery script to rotate through a series of images pulled into an unordered list using PHP:

        function theRotator() {
            $('#rotator li').css({opacity: 0.0});
            $('#rotator li:first').css({opacity: 1.0});
            setInterval('rotate()', 5000);
        };

        function rotate() {
            var current = ($('#rotator li.show') ? $('#rotator li.show') : $('#rotator li:first'));
            var next = ((current.next().length) ?
                ((current.next().hasClass('show')) ? $('#rotator li:first') : current.next()) :
                $('#rotator li:first'));
            next.css({opacity: 0.0}).addClass('show').animate({opacity: 1.0}, 2000);
            current.animate({opacity: 0.0}, 2000).removeClass('show');
        };

        $(document).ready(function() {
            theRotator();
        });

    It works brilliantly in FF, Safari, Chrome and even IE8, but IE6 & 7 are really slow. Can anyone make any suggestions on making it more efficient, or just making it work better in IE6 & 7? The script is from here, btw. Thanks.

  • Need help finding an appropriate task assignment algorithm for a college project involving coordination

    - by Trif Mircea
    Hello. I am a long-time lurker here and have found over time many answers regarding jQuery and web development topics, so I decided to ask a question of my own. This time I have to create a C++ project for college which should help manage the workflow of a company providing all kinds of services through in-the-field teams. The ideas I have so far are:

    - A client-server application; the server is a dispatcher where all the orders from clients arrive, and the clients are mobile devices (PDAs), each team in the field having one.
    - An order from a client is a task. Each task is made up of a series of subtasks.
    - You have a database with estimations of how long each task should take to complete.
    - You also know which tasks or subtasks each team in the field can perform, based on what kind of specialists make up the team (not going to complicate the problem by adding needed materials; it is considered that if a member of a team can perform a subtask, he has the stuff needed).

    Now, knowing these factors, what would a good task assignment algorithm be? The criteria are: how many tasks a team can do, and how many tasks they have in the queue; it could also be location - how far away they are from the place - but I don't think I can implement that. It needs to be efficient and also to adapt quickly if the human dispatcher manually assigns a task. Any help or leads would be really appreciated (one possible starting point is sketched below). Also, I'm not 100% sure about the idea, so if you have another way you would go about creating such an application, please share, even if it is just a quick outline. I have to write a theoretical part too, so even if the ideas are far more complex than what I outlined, that would be OK; I'd write those up and implement what I can.
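
    One possible starting point is a greedy dispatcher, which is easy to reason about and easy for a human dispatcher to override: give each incoming task to the capable team whose queue frees up earliest. A sketch in Python (the task/team model and every name in it are assumptions for illustration, not part of the assignment spec):

        def assign(task, teams, estimates):
            """task: dict with 'subtasks' (list of skill names).
            teams: list of dicts with 'skills' (a set) and 'queue_time' (float hours).
            estimates: skill name -> estimated duration in hours."""
            # Only teams whose specialists cover every subtask are eligible.
            capable = [t for t in teams if set(task['subtasks']) <= t['skills']]
            if not capable:
                return None  # leave the task for the human dispatcher

            # Greedy rule: earliest-available capable team gets the task.
            team = min(capable, key=lambda t: t['queue_time'])
            team['queue_time'] += sum(estimates[s] for s in task['subtasks'])
            return team

    A manual override just means setting the assignment directly and adding the estimate to that team's queue_time, so the greedy rule stays consistent afterwards. For the theoretical part, this is a natural lead-in to classical scheduling results (list scheduling, makespan minimization), since the greedy rule above is essentially list scheduling with an eligibility constraint.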

  • Recursion in prepared statements

    - by Rob
    I've been using PDO and preparing all my statements, primarily for security reasons. However, I have a part of my code that executes the same statement many times with different parameters, and I thought this would be where prepared statements really shine. But they actually break the code... The basic logic of the code is this:

        function someFunction($something) {
            global $pdo;
            $array = array();

            static $handle = null;
            if (!$handle) {
                $handle = $pdo->prepare("A STATEMENT WITH :a_param");
            }
            $handle->bindValue(":a_param", $something);

            if ($handle->execute()) {
                while ($row = $handle->fetch()) {
                    $array[] = someFunction($row['blah']);
                }
            }
            return $array;
        }

    It looked fine to me, but it was missing out a lot of rows. Eventually I realised that the statement handle was being changed (executed with a different param), which means the call to fetch in the while loop will only ever work once; then the function calls itself again and the result set is changed.

    So I am wondering what the best way of using PDO prepared statements recursively is. One way could be to use fetchAll(), but the manual says that has substantial overhead, and the whole point of this is to make it more efficient. The other thing I could do is not reuse a static handle and instead make a new one on every call. I believe that since the query string is the same, internally the MySQL driver will be using a prepared statement anyway, so there is just the small overhead of creating a new handle on each recursive call. Personally I think that defeats the point. Or is there some way of rewriting this?

  • Determining polygon intersection and containment

    - by Victor Liu
    I have a set of simple polygons (no holes, no self-intersections), and I need to check that they don't intersect each other (one can be entirely contained in another; that is okay). I can check this by simply checking the per-vertex inside-ness of one polygon versus the other polygons. I also need to determine the containment tree, which is the set of relationships saying which polygon contains any given polygon. Since no polygon can intersect any other, any contained polygon has a unique container: the "next-bigger" one. In other words, if A contains B contains C, then A is the parent of B, and B is the parent of C, and we don't consider A the parent of C.

    The question: how do I efficiently determine the containment relationships and check the non-intersection criterion? I ask this as one question because maybe a combined algorithm is more efficient than solving each problem separately. The algorithm should take as input a list of polygons, given by lists of their vertices. It should produce a boolean B indicating whether none of the polygons intersects any other polygon, and also, if B = true, a list of pairs (P, C) where polygon P is the parent of child C. This is not homework; this is for a hobby project I am working on.
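
    For the containment half, once non-intersection is established, one representative vertex per polygon is enough (the polygons are either disjoint or strictly nested), and the unique parent is simply the smallest-area polygon containing that vertex. A compact sketch in Python using ray casting and the shoelace formula - a sketch under those assumptions, not robust geometry (degenerate touching edges are not handled):

        def point_in_polygon(pt, poly):
            """Ray-casting test; poly is a list of (x, y) vertices."""
            x, y = pt
            inside = False
            n = len(poly)
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
                    if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                        inside = not inside
            return inside

        def polygon_area(poly):
            """Absolute area via the shoelace formula."""
            s = 0.0
            for i in range(len(poly)):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % len(poly)]
                s += x1 * y2 - x2 * y1
            return abs(s) / 2.0

        def containment_tree(polys):
            """Return (parent_index, child_index) pairs; parents are the
            smallest-area containers, so A-contains-B-contains-C yields
            (A, B) and (B, C) but never (A, C)."""
            areas = [polygon_area(p) for p in polys]
            pairs = []
            for c, poly in enumerate(polys):
                candidates = [p for p in range(len(polys))
                              if p != c and areas[p] > areas[c]
                              and point_in_polygon(poly[0], polys[p])]
                if candidates:
                    parent = min(candidates, key=lambda p: areas[p])
                    pairs.append((parent, c))
            return pairs

    The non-intersection check itself still needs pairwise edge tests (or a sweep line to do better than quadratic), but nothing from that pass is wasted: the same inside/outside verdicts feed directly into the candidate lists above.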
