Search Results

Search found 8344 results on 334 pages for 'count'.


  • Losing reference to $_post variable?

    - by Scott B
    In the code below, the echo at the top returns true, but the echo at the bottom returns nothing. Apparently the code in between is causing me to lose a reference to the $_post variable? <?php echo "in category: ".in_category('is-sidebar', $_post); //RETURNS TRUE if (!get_option('my_hide_recent')) { $cat=get_cat_ID('top-menu'); $catHidden=get_cat_ID('hidden'); $myquery = new WP_Query(); $myquery->query(array( 'cat' => "-$cat,-$catHidden", 'post_not_in' => get_option('sticky_posts') )); $myrecentpostscount = $myquery->found_posts; if ($myrecentpostscount > 0) { ?> <div class="menu"><h4><?php if ($my_sidebar_heading_recent !=="") { echo $my_sidebar_heading_recent; } else { echo "Recent Posts";} ?></h4><ul> <?php global $post; $current_page_recent = get_post( $current_page ); $myrecentposts = get_posts(array('post_not_in' => get_option('sticky_posts'), 'cat' => "-$cat,-$catHidden",'showposts' => $my_recent_count)); foreach($myrecentposts as $idxrecent=>$post) { if($post->ID == $current_page_recent->ID) { $home_menu_recent = ' class="current_page_item'; } else { $home_menu_recent = ' class="page_item'; } $myclassrecent = ($idxrecent == count($myrecentposts) - 1 ? $home_menu_recent.' last"' : $home_menu_recent.'"'); ?> <li<?php echo $myclassrecent ?>><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></li> <?php } ; if (($myrecentpostscount > $my_recent_count) && $my_recent_count > -1){ ?><li><a href="<?php bloginfo('url'); ?>/site-map">View all</a></li><?php } ?></ul></div> <?php } } global $sitemap; echo "in category: ".in_category('is-sidebar', $_post); //RETURNS NOTHING
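
    A possible culprit, offered as a hedged sketch rather than a verified diagnosis: the foreach($myrecentposts as $idxrecent=>$post) loop reuses the global $post, so anything that falls back to the global afterwards sees the last recent post instead of the one you started with. Saving and restoring the global around the loop (the $saved_post name is made up here) keeps it intact:

      global $post;
      $saved_post = $post;                     // remember the post we started with
      foreach ($myrecentposts as $idxrecent => $post) {
          // ... existing <li> markup for each recent post ...
      }
      $post = $saved_post;                     // restore before the final in_category() call
      if (function_exists('wp_reset_postdata')) {
          wp_reset_postdata();                 // extra safety on WordPress 3.0+
      }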

  • String manipulation in Linux kernel module

    - by user577066
    I am having a hard time in manipulating strings while writing module for linux. My problem is that I have a int Array[10] with different values in it. I need to produce a string to be able send to the buffer in my_read procedure. If my array is {0,1,112,20,4,0,0,0,0,0} then my output should be: 0:(0) 1:-(1) 2:-------------------------------------------------------------------------------------------------------(112) 3:--------------------(20) 4:----(4) 5:(0) 6:(0) 7:(0) 8:(0) 9:(0) when I try to place the above strings in char[] arrays some how weird characters end up there here is the code int my_read (char *page, char **start, off_t off, int count, int *eof, void *data) { int len; if (off > 0){ *eof =1; return 0; } /* get process tree */ int task_dep=0; /* depth of a task from INIT*/ get_task_tree(&init_task,task_dep); char tmp[1024]; char A[ProcPerDepth[0]],B[ProcPerDepth[1]],C[ProcPerDepth[2]],D[ProcPerDepth[3]],E[ProcPerDepth[4]],F[ProcPerDepth[5]],G[ProcPerDepth[6]],H[ProcPerDepth[7]],I[ProcPerDepth[8]],J[ProcPerDepth[9]]; int i=0; for (i=0;i<1024;i++){ tmp[i]='\0';} memset(A, '\0', sizeof(A));memset(B, '\0', sizeof(B));memset(C, '\0', sizeof(C)); memset(D, '\0', sizeof(D));memset(E, '\0', sizeof(E));memset(F, '\0', sizeof(F)); memset(G, '\0', sizeof(G));memset(H, '\0', sizeof(H));memset(I, '\0', sizeof(I));memset(J, '\0', sizeof(J)); printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%s\n",A,B,C,D,E,F,G,H,I,J); memset(A,'-',sizeof(A)); memset(B,'-',sizeof(B)); memset(C,'-',sizeof(C)); memset(D,'-',sizeof(D)); memset(E,'-',sizeof(E)); memset(F,'-',sizeof(F)); memset(G,'-',sizeof(G)); memset(H,'-',sizeof(H)); memset(I,'-',sizeof(I)); memset(J,'-',sizeof(J)); printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%\n",A,B,C,D,E,F,G,H,I,J); len = sprintf(page,"0:%s(%d)\n1:%s(%d)\n2:%s(%d)\n3:%s(%d)\n4:%s(%d)\n5:%s(%d)\n6:%s(%d)\n7:%s(%d)\n8:%s(%d)\n9:%s(%d)\n",A,ProcPerDepth[0],B,ProcPerDepth[1],C,ProcPerDepth[2],D,ProcPerDepth[3],E,ProcPerDepth[4],F,ProcPerDepth[5],G,ProcPerDepth[6],H,ProcPerDepth[7],I,ProcPerDepth[8],J,ProcPerDepth[9]); return len; }
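
    For what it's worth, a hedged user-space sketch (plain printf instead of printk, a fixed counts[] standing in for ProcPerDepth[]) of building each dash string with one byte reserved for the terminator; the variable-length arrays above are filled entirely with '-' and never terminated, which may explain the stray characters:

      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          int counts[10] = {0, 1, 112, 20, 4, 0, 0, 0, 0, 0};  /* stand-in for ProcPerDepth[] */
          char bar[128];
          int i;

          for (i = 0; i < 10; i++) {
              int n = counts[i] < (int)sizeof(bar) - 1 ? counts[i] : (int)sizeof(bar) - 1;
              memset(bar, '-', n);
              bar[n] = '\0';                    /* always leave room for the terminator */
              printf("%d:%s(%d)\n", i, bar, counts[i]);
          }
          return 0;
      }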

  • Understanding VS2010 C# parallel profiling results

    - by Haggai
    I have a program with many independent computations so I decided to parallelize it. I use Parallel.For/Each. The results were okay for a dual-core machine - CPU utilization of about 80%-90% most of the time. However, with a dual Xeon machine (i.e. 8 cores) I get only about 30%-40% CPU utilization, although the program spends quite a lot of time (sometimes more than 10 seconds) on the parallel sections, and I see it employs about 20-30 more threads in those sections compared to serial sections. Each thread takes more than 1 second to complete, so I see no reason for them to work in parallel - unless there is a synchronization problem. I used the built-in profiler of VS2010, and the results are strange. Even though I use locks only in one place, the profiler reports that about 85% of the program's time is spent on synchronization (also 5-7% sleep, 5-7% execution, under 1% IO). The locked code is only a cache (a dictionary) get/add: bool esn_found; lock (lock_load_esn) esn_found = cache.TryGetValue(st, out esn); if(!esn_found) { esn = pData.esa_inv_idx.esa[term_idx]; esn.populate(pData.esa_inv_idx.datafile); lock (lock_load_esn) { if (!cache.ContainsKey(st)) cache.Add(st, esn); } } lock_load_esn is a static member of the class of type Object. esn.populate reads from a file using a separate StreamReader for each thread. However, when I press the Synchronization button to see what causes the most delay, I see that the profiler reports lines which are function entrance lines, and doesn't report the locked sections themselves. It doesn't even report the function that contains the above code (reminder - the only lock in the program) as part of the blocking profile with noise level 2%. With noise level at 0% it reports all the functions of the program, which I don't understand why they count as blocking synchronizations. So my question is - what is going on here? How can it be that 85% of the time is spent on synchronization? How do I find out what really is the problem with the parallel sections of my program? Thanks.
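
    Not an answer to the profiler mystery, but a hedged, self-contained sketch of the cache pattern using .NET 4's ConcurrentDictionary.GetOrAdd, which removes the explicit locks entirely. All names below are illustrative, not taken from the program above, and note that the value factory can run more than once under contention:

      using System;
      using System.Collections.Concurrent;
      using System.Threading.Tasks;

      class CacheDemo
      {
          static readonly ConcurrentDictionary<string, string> cache =
              new ConcurrentDictionary<string, string>();

          static string LoadExpensive(string key)
          {
              return key.ToUpperInvariant();      // stand-in for the costly populate-from-file step
          }

          static void Main()
          {
              Parallel.For(0, 1000, i =>
              {
                  cache.GetOrAdd("term" + (i % 10), LoadExpensive);   // lock-free fast path once cached
              });
              Console.WriteLine(cache.Count);     // 10
          }
      }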

  • jQuery: Counter, Tricky problem with effects for brainy people.

    - by Marius
    Hello there! I made this counter that I think is cool because it only makes visible changes to the numbers being changed between each time it is triggered. This is the code // counter $('a').click(function(){ var u = ''; var newStr = ''; $('.counter').each(function(){ var len = $(this).children('span').length; $(this).children('span').each(function(){ u = u + $(this).text(); }); v = parseInt(u) + 1; v = v + ''; for (i=v.length - 1; i >= 0; i--) { if (v.charAt(i) == u.charAt(i)) { break; } } slce = len - (v.length - (i + 1)) updates = $(this).children('span').slice(slce); $(updates).fadeTo(100,0).delay(100).each(function(index){ f = i + 1 + index; $(this).text(v.charAt(f)); }).fadeTo(100,1); }); }); Markup: <span class="counter"> <span></span><span></span><span></span><span></span><span></span><span></span><span style="margin-right:4px;">9</span><span>9</span><span>9</span><span>9</span> </span> <a href="">Add + 1</a> The problem is that I previously used queue() function to delay() $(this).text(v.charAt(f)); by 100ms, (without queue the text()-function would not be delayed because it isnt in the fx catergory) so that the text would be updated before the object had faded to opacity = 0. That would look stupid. When adding multiple counters, only one of the counters would count. When removing the queue function, both counters would work, but as you can imagine, the delay of the text() would be gone (as it isnt fx-category). It is probably a bit tricky to make out how I can have multiple counters, and still delay the text()-function by 100ms, but I was hoping there is somebody here with spare brain capacity for these things ;) You can see a (NSFW) problem demo here: Just look underneath the sharing icons and you will notice that the text changes WHILE the objects fade out. Looking for some help to sove this problem. I would like to call the text() function once the text has faded to opacity 0, then fade in once the text() has executed. Thank you for your time.
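
    One way to get the ordering without queue()/delay(), sketched on the assumption that the rest of the loop stays as above: fadeTo() accepts a completion callback, so each span swaps its text only after its own fade-out finishes and fades back in afterwards, which also keeps multiple counters independent of each other:

      updates.each(function (index) {
          var f = i + 1 + index;
          $(this).fadeTo(100, 0, function () {
              // runs once this span has fully faded out
              $(this).text(v.charAt(f)).fadeTo(100, 1);
          });
      });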

  • In a DDD approach, is this example modelled correctly?

    - by Tag
    Just created an acc on SO to ask this :) Assuming this simplified example: building a web application to manage projects... The application has the following requirements/rules. 1) Users should be able to create projects inserting the project name. 2) Project names cannot be empty. 3) Two projects can't have the same name. I'm using a 4-layered architecture (User Interface, Application, Domain, Infrastructure). On my Application Layer i have the following ProjectService.cs class: public class ProjectService { private IProjectRepository ProjectRepo { get; set; } public ProjectService(IProjectRepository projectRepo) { ProjectRepo = projectRepo; } public void CreateNewProject(string name) { IList<Project> projects = ProjectRepo.GetProjectsByName(name); if (projects.Count > 0) throw new Exception("Project name already exists."); Project project = new Project(name); ProjectRepo.InsertProject(project); } } On my Domain Layer, i have the Project.cs class and the IProjectRepository.cs interface: public class Project { public int ProjectID { get; private set; } public string Name { get; private set; } public Project(string name) { ValidateName(name); Name = name; } private void ValidateName(string name) { if (name == null || name.Equals(string.Empty)) { throw new Exception("Project name cannot be empty or null."); } } } public interface IProjectRepository { void InsertProject(Project project); IList<Project> GetProjectsByName(string projectName); } On my Infrastructure layer, i have the implementation of IProjectRepository which does the actual querying (the code is irrelevant). I don't like two things about this design: 1) I've read that the repository interfaces should be a part of the domain but the implementations should not. That makes no sense to me since i think the domain shouldn't call the repository methods (persistence ignorance), that should be a responsability of the services in the application layer. (Something tells me i'm terribly wrong.) 2) The process of creating a new project involves two validations (not null and not duplicate). In my design above, those two validations are scattered in two different places making it harder (imho) to see whats going on. So, my question is, from a DDD perspective, is this modelled correctly or would you do it in a different way?

  • How To Create A Download Quota.

    - by snikolov
    I need to create a handy file downloader that counts the number of bytes downloaded and stops when it exceeds a preset limit. I need to mirror some files, but I only have 7 GB of bandwidth per month and I don't want to exceed the limit. Example limits can be in bytes or in number of files; each user has their own limit, as well as a limit for Download Quota itself. So if you set a limit of 2 gigabytes for Download Quota, downloads stop at 2 gigabytes, even if you have 3 users with a limit of 1 gigabyte each.

      if ($range) {
          // pass client Range header to rapidshare
          // _insert($range);
          $cookie .= "\r\nRange: $range";
          $multipart = true;
          header("X-UR-RANGE-Range: $range");
      }
      // octet-stream + attachment => client always stores file
      header('Content-type: application/octet-stream');
      header('Content-Disposition: attachment; filename="' . $fn . '"');
      // always included so clients know this script supports resuming
      header("Accept-Ranges: bytes");
      // awful hack to pass rapidshare the premium cookie
      $user_agent = ini_get("user_agent");
      ini_set("user_agent", $user_agent . "\r\nCookie: enc=$cookie");
      $httphandle = fopen($url, "r");
      $headers = stream_get_meta_data($httphandle);
      // check rapidshare's response headers for range / length indicators
      // and pass them straight through to the client
      foreach ($headers["wrapper_data"] as $header) {
          $header = trim($header);
          if (substr(strtolower($header), 0, strlen("content-range")) == "content-range") {
              // _insert($range);
              header($header);
              header("X-RS-RANGE-" . $header);
              $multipart = true; // content-range indicates partial download
          } elseif (substr(strtolower($header), 0, strlen("content-length")) == "content-length") {
              // _insert($range);
              header($header);
              header("X-RS-CL-" . $header);
          }
      }
      // now tell the client this is a partial download
      if ($multipart)
          header('HTTP/1.1 206 Partial Content');
      flush();
      $download_rate = 100;
      while (!feof($httphandle)) {
          // send the current file part to the browser
          $var_stat = fread($httphandle, round($download_rate * 1024));
          $var12 = strlen($var_stat);
          echo $var_stat;
          // flush the content to the browser
          flush();
          // sleep one second
          sleep(1);
      }
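
    A hedged sketch of the byte-counting part, layered onto the streaming loop above. The $quota_bytes and $sent names are made up, and persisting the running total per user (for example in a database) is left out:

      $quota_bytes = 2 * 1024 * 1024 * 1024;   // example limit: 2 GB (assumed value)
      $sent = 0;
      while (!feof($httphandle)) {
          $chunk = fread($httphandle, round($download_rate * 1024));
          $sent += strlen($chunk);
          if ($sent > $quota_bytes) {
              break;                           // preset limit exceeded: stop the transfer
          }
          echo $chunk;
          flush();
          sleep(1);
      }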

  • Any way that I can get the values of a PHP array through Ajax?

    - by Jerry
    Hi All, I am trying to build a shopping cart site. When a user click add to cart image on the product page, The product title will show a "The product is in your cart" text without reloading the page. I am using session and ajax but no luck so far. I appreciate any helps. My html cold <table id="<?php echo $productId; ?>" width="594" border="0" cellpadding="5" cellspacing="0"> <tr> <td><img src="<?php echo "$brandImage"; ?></td> <td <?php echo $productName; ?> //The "The product is in your cart" will be showed here</td> </tr> <tr> <td><a class="addToCart" href="javascript:void(0);" onclick="addToCart(<?php echo $productId?>)"> </td> My Javascript file (addToCart.js) function addToCart(productId){ var url="addToCart.php"; url=url+"?productId="+productId; url=url+"&sid="+Math.random(); $.post( url, function(responseText){ alert(responseText); //I wish I can get productData value from addToCart.php }, "html" ) My php file (addToCart.php) <?php SESSION_START(); $productId=$_GET['productId']; $cart=$_SESSION['cart']; if(isset($cart)){ $cart.=",".$productId; $product=explode(',',$cart); $totalItem=count($product); }else{ $cart=$productId; $totalItem=1; }; $productData=array(); foreach($product as $id){ $productData[$id]=(isset($productData[$id])) ? $productData[$id]+1 :1; }; $_SESSION['cart']=$cart; //print_r($productData); echo $productData; //Not sure what to do to send $productData back to my addToCart.js variable ?> I tried to make the code look simple. Any suggestion will be a great help. Thanks
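
    One common approach, sketched with the variables already defined above: have addToCart.php emit JSON and ask jQuery to parse it by changing the last $.post argument from "html" to "json" (json_encode() needs PHP 5.2+):

      header('Content-Type: application/json');
      echo json_encode(array(
          'totalItem' => $totalItem,
          'cart'      => $productData,         // e.g. {"12":2,"15":1}
      ));

    On the JavaScript side the success callback then receives a plain object, so the quantities are available as, for example, responseData.cart[productId].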

  • php split array into smaller even arrays

    - by SoulieBaby
    I have a function that is supposed to split my array into smaller, evenly distributed arrays, however it seems to be duplicating my data along the way. If anyone can help me out that'd be great. Here's the original array: Array ( [0] => stdClass Object ( [bid] => 42 [name] => Ray White Mordialloc [imageurl] => sp_raywhite.gif [clickurl] => http://www.raywhite.com/ ) [1] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [2] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [3] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [4] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [5] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) [7] => stdClass Object ( [bid] => 34 [name] => Adrianos Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => ) [8] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [9] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [10] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) [11] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [12] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [13] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) [14] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [15] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [16] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) [17] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [18] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [19] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) [20] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [21] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [22] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [23] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [24] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [25] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] 
=> msc logo.jpg [clickurl] => ) ) Here's the php function which is meant to split the array: function split_array($array, $slices) { $perGroup = floor(count($array) / $slices); $Remainder = count($array) % $slices ; $slicesArray = array(); $i = 0; while( $i < $slices ) { $slicesArray[$i] = array_slice($array, $i * $perGroup, $perGroup); $i++; } if ( $i == $slices ) { if ($Remainder > 0 && $Remainder < $slices) { $z = $i * $perGroup +1; $x = 0; while ($x < $Remainder) { $slicesRemainderArray = array_slice($array, $z, $Remainder+$x); $remainderItems = array_merge($slicesArray[$x],$slicesRemainderArray); $slicesArray[$x] = $remainderItems; $x++; $z++; } } }; return $slicesArray; } Here's the result of the split (it somehow duplicates items from the original array into the smaller arrays): Array ( [0] => Array ( [0] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [1] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [2] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [3] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [4] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [5] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [1] => Array ( [0] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [1] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [3] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [4] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [5] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [2] => Array ( [0] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [3] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [4] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [3] => Array ( [0] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [1] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] => msc logo.jpg [clickurl] => ) [2] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => 
http://www.2brothers.com.au/ ) [3] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [4] => Array ( [0] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [1] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [2] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [5] => Array ( [0] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 34 [name] => Adriano's Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => # ) ) [6] => Array ( [0] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) ) [7] => Array ( [0] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) ) [8] => Array ( [0] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [1] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) ) [9] => Array ( [0] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [1] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) ) ) ^^ As you can see there are duplicates from the original array in the newly created smaller arrays. I thought I could remove the duplicates using a multi-dimensional remove duplicate function but that didn't work. I'm guessing my problem is in the array_split function. Any suggestions? :)
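
    For comparison, a hedged sketch of an even split that never re-slices elements it has already handed out: give every group floor(count/slices) elements, give the first (count % slices) groups one extra, and advance a single offset as you go. The function name is arbitrary:

      function split_array_even($array, $slices) {
          $result = array();
          $per    = (int) floor(count($array) / $slices);
          $extra  = count($array) % $slices;
          $offset = 0;
          for ($i = 0; $i < $slices; $i++) {
              $size       = $per + ($i < $extra ? 1 : 0);   // spread the remainder one by one
              $result[$i] = array_slice($array, $offset, $size);
              $offset    += $size;
          }
          return $result;
      }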

  • How to limit results in a SharePoint XSL query

    - by David
    Hello all, I am creating a SharePoint site that we will use to report issues with trucks used in our business. Linked to the list I have created will be a page that will display an overview of the trucks and a little truck icon will show the trucks current status. Green and the truck is okay (no open issues), Red and the truck have an open issue with status "Undrivable", Orange and there is two issues open that requires the user to look further into the truck before using it and finally a Gray truck for when there is a new issue created that has not been looked into (not sure if it is drivable or not). I have managed to create the "Dashboard" and with my limit XSL/XPATH knowledge been able to add a truck and replicate the description above but... in my test I have created 4 issues, for example if three of them are changed to status Closed and one left to Undrivable I will get four icons on the page, three with Green trucks and the last one Red. So in theory it works but I obviously only want to see the last truck, one truck. I am not interested in seeing the others. <xsl:template name="dvt_1.rowview"> <xsl:variable name="CountReport" select="count(/dsQueryResponse/Rows/Row[@Highloader='GGEU12' and @Status!='Closed'])" /> <xsl:variable name="MoreThan" select="$CountReport &gt; 1" /> <xsl:variable name="NoReports" select="$CountReport = 0" /> <xsl:variable name="Closed" select=" @Highloader='GGEU12' and @Status='Closed'" /> <xsl:choose> <xsl:when test="$MoreThan"> <div class="ms-vb"><img title='More than one report exist!' border='0' alt='In Progress' src='highloader/Library/hl-orange.png' /></div> </xsl:when> <xsl:otherwise> <div class="ms-vb"><xsl:value-of disable-output-escaping="yes" select="@Icon" /></div> </xsl:otherwise> </xsl:choose> </xsl:template> My hope is that someone with slightly more knowledge can find the last piece of the puzzle for me! Thanks for reading and asking questions to fill any gap I left above. David

  • itextsharp PdfCopy and landscape pages

    - by Andreas Rehm
    I'm using itextsharp to join mutiple pdf documents and add a footer. My code works fine - except for landscape pages - it isn't detecting the page rotation - the footer is not centerd for landscape: public static int AddPagesFromStream(Document document, PdfCopy pdfCopy, Stream m, bool addFooter, int detailPages, string footer, int footerPageNumOffset, int numPages, string pageLangString, string printLangString) { CreateFont(); try { m.Seek(0, SeekOrigin.Begin); var reader = new PdfReader(m); // get page count var pdfPages = reader.NumberOfPages; var i = 0; // add pages while (i < pdfPages) { i++; // import page with pdfcopy var page = pdfCopy.GetImportedPage(reader, i); // get page center float posX; float posY; var rotation = page.BoundingBox.Rotation; if (rotation == 0 || rotation == 180) { posX = page.Width / 2; posY = 0; } else { posX = page.Height / 2; posY = 20f; } var ps = pdfCopy.CreatePageStamp(page); var cb = ps.GetOverContent(); // add footer cb.SetColorFill(BaseColor.WHITE); var gs1 = new PdfGState {FillOpacity = 0.8f}; cb.SetGState(gs1); cb.Rectangle(0, 0, document.PageSize.Width, 46f + posY); cb.Fill(); // Text cb.SetColorFill(BaseColor.BLACK); cb.SetFontAndSize(baseFont, 7); cb.BeginText(); // create text var pages = string.Format(pageLangString, i + footerPageNumOffset, numPages); cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, printLangString, posX, 40f + posY, 0f); cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, footer, posX, 28f + posY, 0f); cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, pages, posX, 20f + posY, 0f); cb.EndText(); ps.AlterContents(); // add page to new pdf pdfCopy.AddPage(page); } // close PdfReader reader.Close(); // return number of pages return i; } catch (Exception e) { Console.WriteLine(e); return 0; } } How do I detect the page rotation (e.g. landscape) format in this case? The given example works for PdfReader but not for PdfCopy. Edit: Why do I need PdfCopy? I tried copying a word pdf export. Some word hyperlinks will not work when you try to copy pages with PdfReader. Only PdfCopy transfers all needed page informations. Edit: (SOLVED) You need to use reader.GetPageRotation(i);
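
    Building on the fix noted at the end, a hedged sketch of how the rotation lookup could replace the BoundingBox check inside the while loop:

      // rotation comes from the reader, not from the imported page
      int rotation = reader.GetPageRotation(i);
      bool isLandscape = (rotation == 90 || rotation == 270);

      float posX = isLandscape ? page.Height / 2 : page.Width / 2;
      float posY = isLandscape ? 20f : 0f;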

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!
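
    Before reaching for GiST or GIN, one hedged rewrite worth trying: the BETWEEN-on-expressions clause appears to always be true (every date falls between January 1 and December 31 of its own year), and the extract(YEAR ...) BETWEEN filter is not sargable. If the intent is simply "calendar years 1900 through 2009", a plain range on the column lets a btree index on taken (per child table) do the work:

      SELECT date_part('year', m.taken) AS year_taken,
             count(1)      AS measurements,
             avg(m.amount) AS amount
      FROM   climate.measurement m
      WHERE  m.station_id IN (SELECT s.id FROM climate.station s /* same city/elevation filter as above */)
        AND  m.category_id = 1
        AND  m.taken >= DATE '1900-01-01'
        AND  m.taken <  DATE '2010-01-01'
      GROUP  BY date_part('year', m.taken);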

  • Chaining animations and memory management

    - by bryan1967
    Hey Everyone, Got a question. I have a subclassed UIView that is acting as my background where I am scrolling the ground. The code is working really nice and according to the Instrumentation, I am not leaking nor is my created and still living Object allocation growing. I have discovered else where in my application that adding an animation to a UIImageView that is owned by my subclassed UIView seems to bump up my retain count and removing all animations when I am done drops it back down. My question is this, when you add an animation to a layer with a key, I am assuming that if there is already a used animation in that entry position in the backing dictionary that it is released and goes into the autorelease pool? For example: - (void)animationDidStop:(CAAnimation *)theAnimation finished:(BOOL)flag { NSString *keyValue = [theAnimation valueForKey:@"name"]; if ( [keyValue isEqual:@"step1"] && flag ) { groundImageView2.layer.position = endPos; CABasicAnimation *position = [CABasicAnimation animationWithKeyPath:@"position"]; position.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; position.toValue = [NSValue valueWithCGPoint:midEndPos]; position.duration = (kGroundSpeed/3.8); position.fillMode = kCAFillModeForwards; [position setDelegate:self]; [position setRemovedOnCompletion:NO]; [position setValue:@"step2-1" forKey:@"name"]; [groundImageView2.layer addAnimation:position forKey:@"positionAnimation"]; groundImageView1.layer.position = startPos; position = [CABasicAnimation animationWithKeyPath:@"position"]; position.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; position.toValue = [NSValue valueWithCGPoint:midStartPos]; position.duration = (kGroundSpeed/3.8); position.fillMode = kCAFillModeForwards; [position setDelegate:self]; [position setRemovedOnCompletion:NO]; [position setValue:@"step2-2" forKey:@"name"]; [groundImageView1.layer addAnimation:position forKey:@"positionAnimation"]; } else if ( [keyValue isEqual:@"step2-2"] && flag ) { groundImageView1.layer.position = midStartPos; CABasicAnimation *position = [CABasicAnimation animationWithKeyPath:@"position"]; position.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; position.toValue = [NSValue valueWithCGPoint:endPos]; position.duration = 12; position.fillMode = kCAFillModeForwards; [position setDelegate:self]; [position setRemovedOnCompletion:NO]; [position setValue:@"step1" forKey:@"name"]; [groundImageView1.layer addAnimation:position forKey:@"positionAnimation"]; } } This chains animations infinitely, and as I said one it is running the created and living object allocation doesn't change. I am assuming everytime I add an animation the one that exists in that key position is released. Just wondering I am correct. Also, I am relatively new to Core Animation. I tried to play around with re-using the animations but got a little impatient. Is it possible to reuse animations? Thanks! Bryan

  • Strange results - I obtain same value for all keys

    - by Pietro Luciani
    I have a problem with mapreduce. Giving as input a list of song ("Songname"#"UserID"#"boolean") i must have as result a song list in which is specified how many time different useres listen them... so a output ("Songname","timelistening"). I used hashtable to allow only one couple . With short files it works well but when I put as input a list about 1000000 of records it returns me the same value (20) for all records. This is my mapper: public static class CanzoniMapper extends Mapper<Object, Text, Text, IntWritable>{ private IntWritable userID = new IntWritable(0); private Text song = new Text(); public void map(Object key, Text value, Context context) throws IOException, InterruptedException { /*StringTokenizer itr = new StringTokenizer(value.toString()); while (itr.hasMoreTokens()) { word.set(itr.nextToken()); context.write(word, one); }*/ String[] caratteri = value.toString().split("#"); if(caratteri[2].equals("1")){ song.set(caratteri[0]); userID.set(Integer.parseInt(caratteri[1])); context.write(song,userID); } } } This is my reducer: public static class CanzoniReducer extends Reducer<Text,IntWritable,Text,IntWritable> { private IntWritable result = new IntWritable(); public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { Hashtable<IntWritable,Text> doppioni = new Hashtable<IntWritable,Text>(); for (IntWritable val : values) { doppioni.put(val,key); } result.set(doppioni.size()); //doppioni.clear(); context.write(key,result); } } and main: Configuration conf = new Configuration(); Job job = new Job(conf, "word count"); job.setJarByClass(Canzoni.class); job.setMapperClass(CanzoniMapper.class); //job.setCombinerClass(CanzoniReducer.class); //job.setNumReduceTasks(2); job.setReducerClass(CanzoniReducer.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); System.exit(job.waitForCompletion(true) ? 0 : 1); Any idea???
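
    One hedged thing to check: Hadoop reuses the same IntWritable instance while iterating over values, so storing val itself in a collection ends up storing one shared, mutated object. A sketch of the reducer body that copies the primitive out instead (a HashSet of Integer replaces the Hashtable, since only the count of distinct users is needed; add import java.util.HashSet and java.util.Set):

      Set<Integer> listeners = new HashSet<Integer>();
      for (IntWritable val : values) {
          listeners.add(val.get());   // copy the int; don't keep the recycled IntWritable
      }
      result.set(listeners.size());
      context.write(key, result);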

  • Parallel.For maintain input list order on output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try to do in a parallel manner to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I'd need someone to slap my hands if I did something very stupid. There's a query that returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.Net WebForms). The original coder decided he would construct the table cell-by-cell (TableCell), add them to rows (TableRow), then each row to the table. So no GridView, allegedly its performance is bad, but the performance was very poor regardless :). The database query returns in no time, the most time is spent on looping through the results and adding table cells etc. I made the following method to maintain the original order of the list: private TableRow[] ComposeRows(List<CardHolder> queryResult) { int queryElementsCount = queryResult.Count(); // array with the query's size var rowArray = new TableRow[queryElementsCount]; Parallel.For(0, queryElementsCount, i => { var row = new TableRow(); var cell = new TableCell(); // various operations, including simple ones such as: cell.Text = queryResult[i].Name; row.Cells.Add(cell); // here I'm adding the current item to it's original index // to maintain order in the output list rowArray[i] = row; }); return rowArray; } So as you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't just simply omit the ordering from the original query to do it after the operations. Also, I also thought it would be a good idea to Dispose() the objects at the end of each loop, because the query can return a huge list and letting cell and row objects pile up in the heap could impact performance.(?) How badly did I do? Does anyone have a better solution in case mine is flawed?
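
    The index-into-a-preallocated-array approach is sound for keeping order. As an alternative, a hedged sketch with PLINQ, where AsOrdered() preserves the input order of queryResult in the output array (assumes .NET 4, a using System.Linq directive, and the same CardHolder.Name member used above):

      TableRow[] rowArray = queryResult
          .AsParallel()
          .AsOrdered()                      // output order == input order
          .Select(holder =>
          {
              var cell = new TableCell { Text = holder.Name };
              var row  = new TableRow();
              row.Cells.Add(cell);
              return row;
          })
          .ToArray();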

  • Working through exercises in "Cocoa Programming for Mac OS X" - I'm stumped

    - by Zigrivers
    I've been working through the exercises in a book recommended here on stackoverflow, however I've run into a problem and after three days of banging my head on the wall, I think I need some help. I'm working through the "Speakline" exercise where we add a TableView to the interface and the table will display the "voices" that you can choose for the text to speech aspect of the program. I am having two problems that I can't seem to get to the bottom of: I get the following error: * Illegal NSTableView data source (). Must implement numberOfRowsInTableView: and tableView:objectValueForTableColumn:row: The tableView that is supposed to display the voices comes up blank I have a feeling that both of these problems are related. I'm including my interface code here: #import <Cocoa/Cocoa.h> @interface AppController : NSObject <NSSpeechSynthesizerDelegate, NSTableViewDelegate> { IBOutlet NSTextField *textField; NSSpeechSynthesizer *speechSynth; IBOutlet NSButton *stopButton; IBOutlet NSButton *startButton; IBOutlet NSTableView *tableView; NSArray *voiceList; } - (IBAction)sayIt:(id)sender; - (IBAction)stopIt:(id)sender; @end And my implementation code here: #import "AppController.h" @implementation AppController - (id)init { [super init]; //Log to help me understand what is happening NSLog(@"init"); speechSynth = [[NSSpeechSynthesizer alloc] initWithVoice:nil]; [speechSynth setDelegate:self]; voiceList = [[NSSpeechSynthesizer availableVoices] retain]; return self; } - (IBAction)sayIt:(id)sender { NSString *string = [[textField stringValue] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]]; //Is the string zero-length? if([string length] == 0) { NSLog(@"String from %@ is a string with a length of %d.", textField, [string length]); [speechSynth startSpeakingString:@"Please enter a phrase first."]; } [speechSynth startSpeakingString:string]; NSLog(@"Started to say: %@", string); [stopButton setEnabled:YES]; [startButton setEnabled:NO]; } - (IBAction)stopIt:(id)sender { NSLog(@"Stopping..."); [speechSynth stopSpeaking]; } - (void) speechSynthesizer:(NSSpeechSynthesizer *)sender didFinishSpeaking:(BOOL)complete { NSLog(@"Complete = %d", complete); [stopButton setEnabled:NO]; [startButton setEnabled:YES]; } - (NSInteger)numberOfRowsInTableView:(NSTableView *)aTableView { return [voiceList count]; } - (id)tableView: (NSTableView *)tv objecValueForTableColumn: (NSTableColumn *)tableColumn row:(NSInteger)row { NSString *v = [voiceList objectAtIndex:row]; NSLog(@"v = %@",v); NSDictionary *dict = [NSSpeechSynthesizer attributesForVoice:v]; return [dict objectForKey:NSVoiceName]; } /* - (BOOL)respondsToSelector:(SEL)aSelector { NSString *methodName = NSStringFromSelector(aSelector); NSLog(@"respondsToSelector: %@", methodName); return [super respondsToSelector:aSelector]; } */ @end Hopefully, you guys can see something obvious that I've missed. Thank you!
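
    One detail worth double-checking against the error message: the data-source selector has to match NSTableViewDataSource exactly, and the implementation above spells it objecValueForTableColumn (missing a "t"), so Cocoa would not find it. A sketch of the expected signature:

      - (id)tableView:(NSTableView *)tv
            objectValueForTableColumn:(NSTableColumn *)tableColumn
            row:(NSInteger)row
      {
          NSString *v = [voiceList objectAtIndex:row];
          NSDictionary *dict = [NSSpeechSynthesizer attributesForVoice:v];
          return [dict objectForKey:NSVoiceName];
      }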

  • Counting Alphabetic Characters That Are Contained in an Array with C

    - by Craig
    Hello everyone, I am having trouble with a homework question that I've been working at for quite some time. I don't know exactly why the question is asking and need some clarification on that and also a push in the right direction. Here is the question: (2) Solve this problem using one single subscripted array of counters. The program uses an array of characters defined using the C initialization feature. The program counts the number of each of the alphabetic characters a to z (only lower case characters are counted) and prints a report (in a neat table) of the number of occurrences of each lower case character found. Only print the counts for the letters that occur at least once. That is do not print a count if it is zero. DO NOT use a switch statement in your solution. NOTE: if x is of type char, x-‘a’ is the difference between the ASCII codes for the character in x and the character ‘a’. For example if x holds the character ‘c’ then x-‘a’ has the value 2, while if x holds the character ‘d’, then x-‘a’ has the value 3. Provide test results using the following string: “This is an example of text for exercise (2).” And here is my source code so far: #include<stdio.h> int main() { char c[] = "This is an example of text for exercise (2)."; char d[26]; int i; int j = 0; int k; j = 0; //char s = 97; for(i = 0; i < sizeof(c); i++) { for(s = 'a'; s < 'z'; s++){ if( c[i] == s){ k++; printf("%c,%d\n", s, k); k = 0; } } } return 0; } As you can see, my current solution is a little anemic. Thanks for the help, and I know everyone on the net doesn't necessarily like helping with other people's homework. ;P
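
    A hedged sketch of the counter-array idea the assignment hints at: index a 26-element array with c - 'a', then print only the non-zero counts, using the required test string:

      #include <stdio.h>

      int main(void)
      {
          char text[] = "This is an example of text for exercise (2).";
          int counts[26] = {0};
          int i;

          for (i = 0; text[i] != '\0'; i++) {
              if (text[i] >= 'a' && text[i] <= 'z')
                  counts[text[i] - 'a']++;      /* single subscripted array of counters */
          }
          for (i = 0; i < 26; i++) {
              if (counts[i] > 0)
                  printf("%c  %d\n", 'a' + i, counts[i]);
          }
          return 0;
      }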

  • SQL Server 2005 - query with case statement

    - by user329266
    Trying to put a single query together to be used eventually in a SQL Server 2005 report. I need to: Pull in all distinct records for values in the "eventid" column for a time frame - this seems to work. For each eventid referenced above, I need to search for all instances of the same eventid to see if there is another record with TaskName like 'review1%'. Again, this seems to work. This is where things get complicated: For each record where TaskName is like review1, I need to see if another record exists with the same eventid and where TaskName='End'. Utimately, I need a count of how many records have TaskName like 'review1%', and then how many have TaskName like 'review1%' AND TaskName='End'. I would think this could be accomplished by setting a new value for each record, and for the eventid, if a record exists with TaskName='End', set to 1, and if not, set to 0. The query below seems to accomplish item #1 above: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000'))) AS T WHERE seq = 1 order by eventid And the query below seems to accomplish #2: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 order by eventid This will bring back the eventid's that also have a TaskName='End': SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 and eventid in (Select eventid from eventrecords where TaskName = 'End') order by eventid So I've tried the following to TRY to accomplish #3: SELECT eventid, TimeStamp, TaskName, filepath FROM (SELECT eventid, TimeStamp, filepath, TaskName, ROW_NUMBER() OVER(PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq FROM eventrecords where ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000')) and TaskName like 'Review1%') AS T WHERE seq = 1 and case when (eventid in (Select eventid from eventrecords where TaskName = 'End') then 1 else 0) as bit end order by eventid When I try to run this, I get: "Incorrect syntax near the keyword 'then'." Not sure what I'm doing wrong. Haven't seen any examples anywhere quite like this. I should mention that eventrecords has a primary key, but it doesn't seem to help anything when I include it, and I am not permitted to change the table. (ugh) I've received one suggestion to use a cursor and temporary table, but am not sure how badley that would bog down performance when the report is running. Thanks in advance.
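
    On the syntax error specifically: CASE is an expression, not a predicate, so it cannot stand alone in a WHERE clause, and the stray parenthesis and "as bit" also break the parse. Since the goal is a 1/0 flag per eventid, a hedged sketch that computes it as a column instead (HasEnd is a made-up name, T is the derived table from the query above):

      SELECT eventid, TimeStamp, TaskName, filepath,
             CASE WHEN EXISTS (SELECT 1 FROM eventrecords e2
                               WHERE e2.eventid = T.eventid AND e2.TaskName = 'End')
                  THEN 1 ELSE 0 END AS HasEnd
      FROM ( /* same ROW_NUMBER() derived table as above */ ) AS T
      WHERE seq = 1
      ORDER BY eventid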

  • Function Point Analysis -- a seriously over-estimating technique?

    - by kizzx2
    I know questions about FPA has been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data. 1. First, some data This question is based on a tutorial. He had a "Sample Count" section where he demonstrated it step by step. You can see some screenshots of his sample application here. In the end, he calculated the unadjusted FP to be 99. There is another article on InformIT with industry data on typical hour/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p). 2. Reality check!? Now just check out the screenshots again. Do a little math here 99 * 2 = 198 hours 198 hours / 40 hours per week = 5 weeks Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week to have it completed? Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25 198 * 7.25 = $1435.5 From what I could see from the screenshots, this application is a small excel-improvement app. I could have bought MS Office Pro for 200 bucks which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems like they typically use 4.2 hours/FP, which gives us even more shocking stats: 99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months! 415 hours * $7.25 = $3000 zomg (That's even assuming that all our poor coders get the minimum wage!) 3. Am I missing something here? Right now, I could come up with several possible explanation: FPA is really only suited for bigger projects (1000+ FPs) so it becomes extremely inaccurate at smaller scale. The hours/FP metric fluctuates abruptly from team to team, project to project. For a small project like this, we could have used something like 0.5 hour/FP or something. (Now this kind of makes the whole estimation thing pointless, unless my firm does the same type of projects for several years with the same team, not really common.) From my experience with several software metrics, Function Point is really not a lightweight metric. If the hour/FP thing fluctuates so much, then what's the point, maybe I could have gone with User Story Points which is a lot faster to get and arguably almost as uncertain. What would be the FP experts' answers to this?

  • BMP image header doubts

    - by vikramtheone
    Hi guys, I'm doing a project where I have to make use of the pixel information of a BMP image, so I'm gathering the image information by reading the header of the input .bmp file. I'm quite successful with everything, but one thing bothers me; can anyone here clarify it? The header information of my .bmp image is as follows (my test image is very tiny and grayscale):

      BMP file header:         file size 1210, offset 1078
      BMP information header:  header size 40, image size 132, width 9, height 11, bits per pixel 8

    From the .bmp header I see that the image size is 132 bytes, but when I multiply the width and height I only get 99. How is that possible? I'm confident in the 132 bytes, because when I subtract the offset value from the file size I get 132 (1210 - 1078 = 132), and when I manually count the bytes in a hex editor from offset 1078 (436h, where the header data ends) there are exactly 132 bytes of pixel information. So why is there a disparity between the size field and width × height? My future implementations depend on the image width and height, not on the image size field, so I have to understand thoroughly what is going on here. My understanding of the header must be wrong, I guess. Help! Regards, Vikram. My BMP structures are as follows:

      typedef struct bmpfile_magic {
          short magic;
      } BMP_MAGIC_NUMBER;

      typedef struct bmpfile_header {
          uint32_t filesz;
          uint16_t creator1;
          uint16_t creator2;
          uint32_t bmp_offset;
      } BMP_FILE_HEADER;

      typedef struct {
          uint32_t header_sz;
          uint32_t width;
          uint32_t height;
          uint16_t nplanes;
          uint16_t bitspp;
          uint32_t compress_type;
          uint32_t bmp_bytesz;
          uint32_t hres;
          uint32_t vres;
          uint32_t ncolors;
          uint32_t nimpcolors;
      } BMP_INFO_HEADER;
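
    The 99-versus-132 gap is almost certainly row padding: each BMP scan line is stored padded to a multiple of 4 bytes, so a 9-pixel, 8-bit row occupies 12 bytes, and 12 × 11 = 132. A small sketch of the usual stride formula:

      #include <stdio.h>

      int main(void)
      {
          unsigned width = 9, height = 11, bits_per_pixel = 8;
          unsigned row_bytes = ((width * bits_per_pixel + 31) / 32) * 4;  /* 9 pixels -> 12 bytes */

          printf("stride = %u, image size = %u\n", row_bytes, row_bytes * height);  /* 12, 132 */
          return 0;
      }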

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The HDF5 implementation is not multithreaded and is not enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with all of them in order to speed up my data handling.

    As for the PostgreSQL side, the overall record count is 140 million and I have 5 tables linked by primary/foreign keys. I am not using joins, as they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

      select * from x.train where tr_id=1     (primary key, indexed)
      select q_t from x.qt where q_id=2       (non-primary key but indexed)
      (similarly, five more queries)

    Each computer writes two HDF5 files, so the total comes to around 20 files.

    Some calculations and statistics: total number of records: 14,37,00,000; records per file: 143700000 / 20 = 71,85,000; total number of records in each file: 71,85,000 * 5 = 3,59,25,000.

    Current PostgreSQL configuration: my current machine has 8 GB RAM and a 2nd-generation i7 processor. I changed the following in the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written per file so far is about 6,21,000 * 5 = 31,05,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how to improve this.

    Questions: 1. Should I use symmetric multiprocessing on those desktops (each has 2 cores and about 2 GB of RAM)? If so, what is suggested or preferable? 2. If I change my PostgreSQL configuration file and increase the RAM, will it improve the process? 3. Should I use multithreading? In that case, any links or pointers would be of great help. Thanks, Sree Aurovindh V
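
    One hedged idea for the per-record lookups, independent of the HDF5 side: batch the primary-key fetches so each worker does one round trip per few thousand ids instead of six queries per record. A sketch with psycopg2 — the connection string and batch size are made up, and only the x.train lookup from above is shown:

      import psycopg2

      conn = psycopg2.connect("dbname=mydb user=me")        # assumed connection details
      cur = conn.cursor()

      tr_ids = list(range(1, 5001))                         # one batch of 5,000 keys
      cur.execute("SELECT * FROM x.train WHERE tr_id = ANY(%s)", (tr_ids,))
      for row in cur:                                       # stream the batch into PyTables
          pass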

  • binary files writing/reading problems...

    - by ScReYm0
    OK, I have a problem with my code for reading a binary file... First I will show you my writing code:

      void book_saving(char *file_name, struct BOOK *current)
      {
          FILE *out;
          BOOK buf;
          out = fopen(file_name, "wb");
          if(out != NULL) {
              printf_s("Writing to file...");
              do {
                  if(current != NULL) {
                      strcpy(buf.catalog_number, current->catalog_number);
                      strcpy(buf.author, current->author);
                      buf.price = current->price;
                      strcpy(buf.publisher, current->publisher);
                      strcpy(buf.title, current->title);
                      buf.price = current->year_published;
                      fwrite(&buf, sizeof(BOOK), 1, out);
                  }
                  current = current->next;
              } while(current != NULL);
              printf_s("Done!\n");
              fclose(out);
          }
      }

    and here is my "version" for reading:

      int book_open(struct BOOK *current, char *file_name)
      {
          FILE *in;
          BOOK buf;
          BOOK *vnext;
          int count;
          int i;
          in = fopen("west", "rb");
          printf_s("Reading database from %s...", file_name);
          if(!in) {
              printf_s("\nERROR!");
              return 1;
          }
          i = fread(&buf, sizeof(BOOK), 1, in);
          while(!feof(in)) {
              if(current != NULL) {
                  current = malloc(sizeof(BOOK));
                  current->next = NULL;
              }
              strcpy(current->catalog_number, buf.catalog_number);
              strcpy(current->title, buf.title);
              strcpy(current->publisher, buf.publisher);
              current->price = buf.price;
              current->year_published = buf.year_published;
              fread(&buf, 1, sizeof(BOOK), in);
              while(current->next != NULL)
                  current = current->next;
              fclose(in);
          }
          printf_s("Done!");
          return 0;
      }

    I just need to save my linked list to a binary file and be able to read it back... please help me. The program either doesn't read it or crashes, with a different outcome every time...
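
    For reference, a hedged sketch of a read loop that builds the list front to back: test fread's return value instead of feof, allocate a node per record, and close the file once after the loop. The head/tail names are illustrative, and BOOK is assumed to have the next pointer used in the save code:

      BOOK buf, *head = NULL, *tail = NULL;

      while (fread(&buf, sizeof(BOOK), 1, in) == 1) {
          BOOK *node = malloc(sizeof(BOOK));
          if (node == NULL)
              break;                    /* out of memory: keep what we have */
          *node = buf;                  /* copy the record just read */
          node->next = NULL;
          if (tail == NULL)
              head = node;              /* first record becomes the head */
          else
              tail->next = node;        /* append at the end */
          tail = node;
      }
      fclose(in);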

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?

    Read the article

  • c#: Design advice. Using DataTable or List<MyObject> for a generic rule checker

    - by Andrew White
    Hi, I have about 100,000 rows of generic data. The columns/properties of this data are user-definable and are of the usual data types (string, int, double, date). There will be about 50 columns/properties. I have two needs:

    1. To be able to calculate new columns/properties using an expression, e.g. Column3 = Column1 * Column2. Ultimately I would like to be able to use external data via a callback, e.g. Column3 = Column1 * GetTemperature. The expressions are relatively simple: maths operations, Sum, Count and IF are the only necessary functions.
    2. To be able to filter/group the data and perform aggregations, e.g. Sum(Data.Column1) Where(Data.Column2 == "blah").

    As far as I can see I have two options:

    1. Using a DataTable.
       - Point 1 above is achieved by using DataColumn.Expression.
       - Point 2 above is achieved by using DataTable.DefaultView.RowFilter plus C# code.
    2. Using a List of generic objects, each with a Dictionary<string, object> to store the values.
       - Point 1 could be achieved by something like NCalc.
       - Point 2 is achieved using LINQ.

    DataTable: Pros: DataColumn.Expression is built in. Cons: RowFilter plus hand-written C# is not as "nice" as LINQ, and DataColumn.Expression does not support callbacks(?) - a workaround could be to fetch and substitute the external value when creating the calculated column.

    Generic list: Pros: LINQ syntax, and NCalc supports callbacks. Cons: implementing NCalc / a generic calculation engine.

    Based on the above I would think the generic-list approach would win, but something I have not factored in is performance, which for some reason I suspect would be better with a DataTable. Does anyone have a gut feeling / experience with LINQ vs. DataTable performance? How about NCalc? As I said, there are about 100,000 rows of data with 50 columns, of which maybe 20 are calculated. In total about 50 rules will be run against the data, so there will be around 5 million row/object scans. Would really appreciate any insights. Thanks. P.S. Of course using a database + SQL and views etc. would be the easiest solution, but for various reasons that can't be implemented.
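
    For what it's worth, the filter/aggregate side of option 2 is plain LINQ over dictionary-backed rows. A rough sketch, using the column names from the example above (everything else here is assumed, not from the question):

    // Sketch of option 2's filtering/aggregation - not a full rule engine.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class RuleCheckSketch
    {
        static void Main()
        {
            // Each row is a property bag; in practice these would be the 100,000 loaded rows.
            var rows = new List<Dictionary<string, object>>
            {
                new Dictionary<string, object> { { "Column1", 2.5 }, { "Column2", "blah" } },
                new Dictionary<string, object> { { "Column1", 4.0 }, { "Column2", "other" } },
            };

            // Calculated column (need 1), e.g. Column3 = Column1 * 2.
            foreach (var row in rows)
                row["Column3"] = Convert.ToDouble(row["Column1"]) * 2;

            // Filtered aggregation (need 2): Sum(Column1) Where Column2 == "blah".
            double sum = rows
                .Where(r => (string)r["Column2"] == "blah")
                .Sum(r => Convert.ToDouble(r["Column1"]));

            Console.WriteLine(sum);   // prints 2.5
        }
    }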

    Read the article

  • dynamically schedule a lead sales agent

    - by Josh
    I have a website that I'm trying to migrate from classic ASP to ASP.NET. It had a lead schedule, where each sales agent would be featured for the current day, or part of the day. The next day a new agent would be scheduled. It was driven off a database table that had a row for each day in it. So to figure out whether a sales agent would show on a given day, it was easy: just find today's date in the table. The problem was it ran out of rows, and you had to run a script to populate the lead days six months at a time. Plus, if there was ever any change to the schedule, you had to delete all the rows and re-run the script. So I'm trying to code it so that SQL Server figures this out for me and no script ever has to be run. I have a table like so:

    CREATE TABLE [dbo].[LeadSchedule](
        [leadid] [int] IDENTITY(1,1) NOT NULL,
        [userid] [int] NOT NULL,
        [sunday] [bit] NOT NULL,
        [monday] [bit] NOT NULL,
        [tuesday] [bit] NOT NULL,
        [wednesday] [bit] NOT NULL,
        [thursday] [bit] NOT NULL,
        [friday] [bit] NOT NULL,
        [saturday] [bit] NOT NULL,
        [StartDate] [smalldatetime] NULL,
        [EndDate] [smalldatetime] NULL,
        [StartTime] [time](0) NULL,
        [EndTime] [time](0) NULL,
        [order] [int] NULL,

    So the user can schedule a sales agent depending on their work schedule. Also, if they wanted to, they could split certain days or sales agents by time: from midnight to 4 it was one agent, from 4 to midnight it was another. So far I've tried using a numbers table, row numbers, and goofy date math, and I'm at a loss. Any suggestions on how to handle this purely in SQL? If it helps, the table should always be small - fewer than 20 rows, never over 100.

    Update: After a few hours, all I've managed to come up with is the query below. It doesn't handle filling in unavailable days or the time splits; it just rotates through all the sales agents:

    with leadTable as (
        select leadid, userid, [order], StartDate,
            case DATEPART(dw, getdate())
                when 1 then sunday
                when 2 then monday
                when 3 then tuesday
                when 4 then wednesday
                when 5 then thursday
                when 6 then friday
                when 7 then saturday
            end as DayAvailable,
            ROW_NUMBER() OVER (ORDER BY [order] ASC) AS ROWID
        from LeadSchedule
        where GETDATE() >= StartDate
            and (CONVERT(time(0), GETDATE()) >= StartTime or StartTime is null)
            and (CONVERT(time(0), GETDATE()) <= EndTime or EndTime is null)
    )
    select userid, DATEADD(d, (number + ROWID - 2) * totalUsers, startdate) leadday
    from (
        select *, (select COUNT(1) from leadTable) totalUsers
        from leadTable
        inner join Numbers on 1 = 1
        where DayAvailable = 1
    ) tb1
    order by leadday asc
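
    One direction that may simplify this, sketched under assumptions (it ignores the per-day availability flags and the time splits, and the anchor date is made up): treat the [order] column as a rotation slot and pick today's agent with date arithmetic against a fixed anchor, so no rows ever need to be pre-generated:

    -- Hypothetical names/values; @anchor is an arbitrary "rotation started here" date.
    DECLARE @anchor date = '2010-01-01';

    ;WITH Agents AS (
        SELECT userid,
               ROW_NUMBER() OVER (ORDER BY [order]) - 1 AS slot,
               COUNT(*) OVER () AS total
        FROM LeadSchedule
    )
    SELECT userid
    FROM Agents
    WHERE slot = DATEDIFF(day, @anchor, GETDATE()) % total;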

    Read the article

  • iPhone App is leaking memory; Instruments and Clang cannot find the leak

    - by Norbert
    Hi, I've developed an iPhone program which is essentially an image manipulation app: the user gets a UIImagePickerController and selects an image. Then the program does some heavy calculation in a new thread (for responsiveness of the application). The thread has, of course, its own autorelease pool. When the calculation is done, the separate thread signals the main thread that the result can be presented. The app creates a new view controller and pushes it onto the navigation controller. In short:

    1. UIImagePickerController
    2. New thread (with its own autorelease pool) does some heavy calculation with the image data
    3. Signal to the main thread that it's done
    4. Main thread creates a view controller and pushes it onto the navigation controller
    5. View controller presents the image result

    My program works well, but if I dismiss the navigation controller's top view controller by tapping the back button and repeat the whole process several times, my app crashes. But only on the device! Instruments cannot find any leaks (except for some minor ones which I don't feel responsible for: thread creation, NSCFString; about 10 kB overall). Even the Clang static analyzer tells me that my code seems to be all right. I know that the UIImage class can cache images and that objects returned from convenience methods get freed only when their autorelease pool gets drained. But most of the time I work with CGImageRef, and I use UIImage's alloc, init and release methods to free memory as soon as possible. Currently I don't know how to isolate the problem. How would you approach it? Crash log:

    Incident Identifier: F4C202C9-1338-48FC-80AD-46248E6C7154
    CrashReporter Key:   bb6f526d8b9bb680f25ea8e93bb071566ccf1776
    OS Version:          iPhone OS 3.1.1 (7C145)
    Date:                2009-09-26 14:18:57 +0200

    Free pages:      372
    Wired pages:     7754
    Purgeable pages: 0
    Largest process: _MY_APP_

    Processes (Name, UUID, Count resident pages):
    _MY_APP_         <032690e5a9b396058418d183480a9ab3>  17766 (jettisoned) (active)
    debugserver      <ec29691560aa0e2994f82f822181bffd>  107
    syslog_relay     <21e13fa2b777218bdb93982e23fb65d3>  62
    notification_pro <8a7725017106a28b545fd13ed58bf98c>  64
    notification_pro <8a7725017106a28b545fd13ed58bf98c>  64
    afcd             <98b45027fbb1350977bf1ca313dee527>  65
    mediaserverd     <eb8fe997a752407bea573cd3adf568d3>  319
    ptpd             <b17af9cf6c4ad16a557d6377378e8a1e>  142
    syslogd          <ec8a5bc4483638539fa1266363dee8b8>  68
    BTServer         <1bb74831f93b1d07c48fb46cc31c15da>  119
    apsd             <a639ba83e666cc1d539223923ce59581>  165
    notifyd          <2ed3a1166da84d8d8868e64d549cae9d>  101
    CommCenter       <f4239480a623fb1c35fa6c725f75b166>  161
    SpringBoard      <8919df8091fdfab94d9ae05f513c0ce5>  2681 (active)
    accessoryd       <b66bcf6e77c3ee740c6a017f54226200>  90
    configd          <41e9d763e71dc0eda19b0afec1daee1d>  275
    fairplayd        <cdce5393153c3d69d23c05de1d492bd4>  108
    mDNSResponder    <f3ef7a6b24d4f203ed147f476385ec53>  103
    lockdownd        <6543492543ad16ff0707a46e512944ff>  297
    launchd          <73ce695fee09fc37dd70b1378af1c818>  71
    **End**
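
    One pattern worth ruling out, sketched below in pre-ARC Objective-C (the class, kNumPasses and -calculationDone are made up, not the poster's code): drain an inner pool on every pass of the heavy loop and pair every CGImage Create/Copy call with CGImageRelease, so peak memory on the worker thread stays low even when Leaks reports nothing - jettisoning is about peak footprint, not leaks.

    #import <UIKit/UIKit.h>

    enum { kNumPasses = 4 };                          // hypothetical number of passes

    @interface ImageWorker : NSObject
    - (void)processImage:(UIImage *)source;
    - (void)calculationDone;
    @end

    @implementation ImageWorker

    - (void)processImage:(UIImage *)source            // entry point of the worker thread
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        for (int pass = 0; pass < kNumPasses; pass++) {
            NSAutoreleasePool *inner = [[NSAutoreleasePool alloc] init];

            CGImageRef scratch = CGImageCreateCopy(source.CGImage);
            // ... heavy per-pass work on scratch ...
            CGImageRelease(scratch);                  // Create/Copy CG objects must be released explicitly

            [inner drain];                            // free autoreleased temporaries before the next pass
        }

        [self performSelectorOnMainThread:@selector(calculationDone)
                               withObject:nil
                            waitUntilDone:NO];
        [pool drain];
    }

    - (void)calculationDone
    {
        // main thread: create the result view controller and push it onto the navigation controller
    }

    @end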

    Read the article
