Search Results

Search found 664 results on 27 pages for 'reducing'.

Page 3/27

  • Reducing size of a character array in Numpy

    - by Morgoth
    Given a character array: In [21]: x = np.array(['a ','bb ','cccc ']) One can remove the whitespace using: In [22]: np.char.strip(x) Out[22]: array(['a', 'bb', 'cccc'], dtype='|S8') but is there a way to also shrink the width of the column to the minimum required size, in the above case |S4?
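
    One possible approach (an editorial sketch, not from the thread): strip first, then cast to the narrowest byte-string dtype that still fits the longest stripped element. The sample array and print call below are illustrative assumptions.

        import numpy as np

        # Strip padding, then rebuild the dtype from the longest remaining string.
        x = np.array([b'a ', b'bb ', b'cccc '])

        stripped = np.char.strip(x)                    # whitespace gone, dtype unchanged
        width = int(np.char.str_len(stripped).max())   # longest remaining string, here 4
        shrunk = stripped.astype('S%d' % width)        # dtype is now |S4
        print(shrunk, shrunk.dtype)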

    Read the article

  • Reducing moire when downsampling halftone comic images.

    - by drawnonward
    How can I reduce moire effects when downsampling halftone comic book images during live zoom on an iPhone or iPad? I am writing a comic book viewer. It would be nice to provide higher resolution images and allow the user to zoom in while reading the comic book. However, my client is averse to moire effects and will not allow this feature if there are noticeable moire artifacts while zooming, which of course there are. Modifying the images to be less susceptible to moire would only work if the modifications were not perceptible. Blur was specifically prohibited, as was anything that removes the beloved halftone dots. The images are black and white halftone and line art. The originals are 600 dpi, but what we ship with the application will be half that at best, so probably 2500 pixels or less tall. So what are my options? If I write a custom downsampling algorithm, would it be fast enough for real time on these devices? Are there other tricks I can do? Would it work to just avoid the size ratios that have the most visible moire effects? As you zoom in and out, there are definitely peaks where the moire effects are worst. Is there a way to calculate what those points are and just zoom to a nearby scale that is not as bad? Any suggestions are welcome. I have very little experience with image and signal processing, but am enjoying the opportunity to learn. I know nothing of wavelets and acutance and other jargon, so please be verbose.
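
    As an editorial aside (not from the post): one generic illustration of why the resampling filter matters for halftones is to compare point sampling with an area-weighted filter on the same page. The sketch below uses Pillow purely to show the concept; the file name, grayscale conversion, and scale ratio are assumptions, and this says nothing about real-time performance on the device.

        from PIL import Image

        # NEAREST picks single pixels and aliases the dot grid into moire;
        # LANCZOS averages over the dots it skips, which suppresses much of it.
        page = Image.open("page_600dpi.png").convert("L")   # hypothetical scan
        w, h = page.size
        scale = 0.43                                        # an awkward, moire-prone ratio
        target = (int(w * scale), int(h * scale))

        aliased = page.resize(target, Image.NEAREST)
        smooth = page.resize(target, Image.LANCZOS)
        aliased.save("aliased.png")
        smooth.save("smooth.png")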

    Read the article

  • Reducing time in C# Forms Control.set_Text(string) function

    - by awshepard
    Hoping for a quick answer (which SO seems to be pretty good for)... I just ran a performance analysis with VS2010 on my app, and it turns out that I'm spending about 20% of my time in the Control.set_Text(string) function, as I'm updating labels in quite a few places in my app. The window has a timer object (Forms timer, not Threading timer) that has a timer1_Tick callback, which updates one label every tick (to give a stop-watch sort of effect), and updates about 15 labels once each second. Does anyone have quick suggestions for how to reduce the amount of time spent updating text on a form, other than increasing the update interval? Are there other structures or functions I should be using?

    Read the article

  • Any tips on reducing wxWidgets application code size?

    - by Billy ONeal
    I have written a minimal wxWidgets application: stdafx.h #define wxNO_REGEX_LIB #define wxNO_XML_LIB #define wxNO_NET_LIB #define wxNO_EXPAT_LIB #define wxNO_JPEG_LIB #define wxNO_PNG_LIB #define wxNO_TIFF_LIB #define wxNO_ZLIB_LIB #define wxNO_ADV_LIB #define wxNO_HTML_LIB #define wxNO_GL_LIB #define wxNO_QA_LIB #define wxNO_XRC_LIB #define wxNO_AUI_LIB #define wxNO_PROPGRID_LIB #define wxNO_RIBBON_LIB #define wxNO_RICHTEXT_LIB #define wxNO_MEDIA_LIB #define wxNO_STC_LIB #include <wx/wxprec.h> Minimal.cpp #include "stdafx.h" #include <memory> #include <wx/wx.h> class Minimal : public wxApp { public: virtual bool OnInit(); }; IMPLEMENT_APP(Minimal) DECLARE_APP(Minimal) class MinimalFrame : public wxFrame { DECLARE_EVENT_TABLE() public: MinimalFrame(const wxString& title); void OnQuit(wxCommandEvent& e); void OnAbout(wxCommandEvent& e); }; BEGIN_EVENT_TABLE(MinimalFrame, wxFrame) EVT_MENU(wxID_ABOUT, MinimalFrame::OnAbout) EVT_MENU(wxID_EXIT, MinimalFrame::OnQuit) END_EVENT_TABLE() MinimalFrame::MinimalFrame(const wxString& title) : wxFrame(0, wxID_ANY, title) { std::auto_ptr<wxMenu> fileMenu(new wxMenu); fileMenu->Append(wxID_EXIT, L"E&xit\tAlt-X", L"Terminate the Minimal Example."); std::auto_ptr<wxMenu> helpMenu(new wxMenu); helpMenu->Append(wxID_ABOUT, L"&About\tF1", L"Show the about dialog box."); std::auto_ptr<wxMenuBar> bar(new wxMenuBar); bar->Append(fileMenu.get(), L"&File"); fileMenu.release(); bar->Append(helpMenu.get(), L"&Help"); helpMenu.release(); SetMenuBar(bar.get()); bar.release(); CreateStatusBar(2); SetStatusText(L"Welcome to wxWidgets!"); } void MinimalFrame::OnAbout(wxCommandEvent& e) { wxMessageBox(L"Some text about me!", L"About", wxOK, this); } void MinimalFrame::OnQuit(wxCommandEvent& e) { Close(); } bool Minimal::OnInit() { std::auto_ptr<MinimalFrame> mainFrame( new MinimalFrame(L"Minimal wxWidgets Application")); mainFrame->Show(); mainFrame.release(); return true; } This minimal program weighs in at 2.4MB! (Executable compression drops this to half a MB or so but that's still HUGE!) (I must statically link because this application needs to be single-binary-xcopy-deployed, so both the C runtime and wxWidgets itself are set for static linking) Any tips on cutting this down? (I'm using Microsoft Visual Studio 2010)

    Read the article

  • Reducing load time, or making the user think the load time is less

    - by Malfist
    I've been working on a website, and we've managed to reduce the total content for a page load from 13.7 MiB to 2.4 MiB, but the page still takes forever to load. It's a Joomla site (ick), and it has a lot of redundant DOM elements (2000+ for the home page) and makes 60+ HTTP requests per page load, counting all the css, js, and image requests. Unlike Drupal, Joomla won't merge them all on the fly, and they have to be kept separate or else the Joomla components will go nuts. What can I do to improve load time? Things I've done: added colors to DOM elements that have large images as their background, so the color is loaded, then the image; reduced excessively large images to much smaller file sizes; reduced DOM elements to ~2000, from ~5000; loading CSS at the start of the page and javascript at the end (not totally possible, Joomla injects its own javascript and css and it always does it in the header); minified most javascript; set up caching and gzipping on the server. Uncached size is 2.4MB, cached is ~300KB, but even with so many DOM elements, the page takes a good bit of time to render. What more can I do to improve the load time?

    Read the article

  • Reducing a set of non-unique elements via transformations

    - by Andrey Fedorov
    I have: 1) a "starting set" of not-necessarily-unique elements, e.g. { x, y, z, z }, 2) a set of transformations, e.g. (x,z) = y, (z,z) = z, x = z, y = x, and 3) a "target set" that I am trying to get by applying transformations to the starting set, e.g. { z }. I'd like to find an algorithm to generate the (possibly infinite) possible applications of the transformations to the starting set that result in the target set. For example: { x, y, z, z }, y => x { x, x, z, z }, x => z { z, x, z, z }, x => z { z, z, z, z }, (z, z) => z { z, z, z }, (z, z) => z { z, z }, (z, z) => z { z } This sounds like something that's probably an existing (named) problem, but I don't recognize it. Can anyone help me track it down, or suggest further reading on something similar?
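
    The search itself can be phrased as reachability over multiset states, where each transformation consumes a sub-multiset and produces one element. Below is a minimal breadth-first sketch (the rule encoding, depth bound, and helper names are editorial assumptions); it finds one shortest derivation, while the full set of derivations may be infinite, as the post notes.

        from collections import Counter, deque

        # Rules taken from the post: (x,z) -> y, (z,z) -> z, x -> z, y -> x.
        rules = [(('x', 'z'), 'y'), (('z', 'z'), 'z'), (('x',), 'z'), (('y',), 'x')]
        start = Counter({'x': 1, 'y': 1, 'z': 2})   # { x, y, z, z }
        target = Counter({'z': 1})                  # { z }

        def successors(state):
            for inputs, output in rules:
                need = Counter(inputs)
                if all(state[e] >= n for e, n in need.items()):
                    nxt = state - need              # consume the inputs
                    nxt[output] += 1                # produce the output
                    yield (inputs, output), nxt

        def search(start, target, max_steps=10):
            frozen = lambda c: tuple(sorted(c.items()))
            queue = deque([(start, [])])
            seen = {frozen(start)}
            while queue:
                state, path = queue.popleft()
                if state == target:
                    return path                     # list of applied rules
                if len(path) >= max_steps:
                    continue
                for rule, nxt in successors(state):
                    key = frozen(nxt)
                    if key not in seen:
                        seen.add(key)
                        queue.append((nxt, path + [rule]))
            return None

        print(search(start, target))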

    Read the article

  • Reducing width of bar chart series

    - by gAMBOOKa
    Note: by width I mean the height here; I use width because by default, bar charts are vertical and Flex uses the width property. I want to reduce the width of the following bar chart so that Type 5, Type 4, Type 3, Type 2, Type 1 are very close to each other. I tried playing with the barWidthRatio, the horizontalAxisRatio and the maxBarWidth properties. None of them is giving me the desired result. I can only manage to reduce the width of the orange bars; how do I reduce the width of the blue bars?

    Read the article

  • Optimally reducing maximum flow

    - by ArIck
    Given a parameter k, I'm trying to delete k edges from a directed graph such that the maximum flow is reduced by as much as possible. The graph has a source s and a sink t, and the capacity of each edge is one. The graph may or may not contain cycles. My proposed solution would be to first perform a topological sorting on the graph, using an algorithm that "forgives" cycles -- perhaps by ignoring edges that lead us back to the source. Then (assuming k = 1): i = 0 for each vertex u order by topological(u) for each edge (u, v) order by topological(v) descending if topological(v) > topological(u) then delete (u, v) if ++i = k then return else // edge doesn't contribute to max flow, ignore Would this work, or am I totally off-track here?
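
    One way to sanity-check a heuristic like this (an editorial sketch, not from the post): on small unit-capacity graphs, brute-force every set of k edges and compare the best achievable reduction against what the heuristic picks. The example graph and function name below are assumptions; networkx does the max-flow computation.

        import itertools
        import networkx as nx

        def best_k_edge_removal(G, s, t, k):
            """Try every k-subset of edges; return the smallest resulting max flow."""
            best = (nx.maximum_flow_value(G, s, t), ())
            for edges in itertools.combinations(list(G.edges()), k):
                H = G.copy()
                H.remove_edges_from(edges)
                flow = nx.maximum_flow_value(H, s, t)
                if flow < best[0]:
                    best = (flow, edges)
            return best

        G = nx.DiGraph()
        G.add_edges_from([('s', 'a'), ('s', 'b'), ('a', 'b'), ('a', 't'), ('b', 't')],
                         capacity=1)
        print(best_k_edge_removal(G, 's', 't', k=1))   # e.g. (1, (('s', 'a'),))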

    Read the article

  • I'm having an issue using GLshort to represent vertices and normals

    - by Xylopia
    As my project gets close to the optimization stage, I notice that reducing vertex metadata could vastly improve the performance of 3D rendering. I've searched around and found the following advice on Stack Overflow: Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array How do you represent a normal or texture coordinate using GLshorts? Advice on speeding up OpenGL ES 1.1 on the iPhone Simple experiments show that switching from "FLOAT" to "SHORT" for vertices and normals isn't tough, but what troubles me is that when you scale the vertices back to their original size (with glScalef), the normals are multiplied by the reciprocal of the scale. The natural remedy for this is to multiply the normals by the scale before you submit them to the GPU. Then my short normals almost become 0, because the scale factor is usually much smaller than 1. Duh! How do you use "short" for both vertex and normal at the same time? I've been trying this and that for about a full day, but I could only go for "float vertex w/ byte normal" or "short vertex w/ float normal" so far. Your help would be truly appreciated.

    Read the article

  • Reducing a normalized table to one value

    - by Dio
    Hello, I'm sure this has been asked but I'm not quite sure how to properly search for this question, my apologies. I have two tables, Foo and Bar. Foo has one row per food; Bar has many rows per food with matching descriptors. Foo name id Apple 1 Orange 2 Bar id description 1 Tasty 1 Ripe 2 Sweet etc (sorry for the somewhat contrived example). I'm trying to write a query where, for each row in Foo, it returns true if Bar contains a descriptor in ('Tasty', 'Juicy'), ex: Output Apple True Orange False I had been solving this somewhat trivially with a CASE when I only had one item to match: select Foo.name, case bar.description when 'Tasty' then True else 'False' end from Foo left join Bar on foo.id = bar.id where bar.description = 'Tasty' But with multiple items, I keep ending up with extra rows: Output Apple True Apple False etc etc Can someone point me in the right direction on how to think about this problem or what I should be doing? Thank you.

    Read the article

  • Stored Procedure: Reducing Table Data

    - by SumGuy
    Hi Guys, A simple question about Stored Procedures. I have one stored procedure collecting a whole bunch of data in a table. I then call this procedure from within another stored procedure. I can copy the data into a new table created in the calling procedure but as far as I can see the tables have to be identical. Is this right? Or is there a way to insert only the data I want? For example.... I have one procedure which returns this: SELECT @batch as Batch, @Count as Qty, pd.Location, cast(pd.GL as decimal(10,3)) as [Length], cast(pd.GW as decimal(10,3)) as Width, cast(pd.GT as decimal(10,3)) as Thickness FROM propertydata pd GROUP BY pd.Location, pd.GL, pd.GW, pd.GT I then call this procedure but only want the following data: DECLARE @BatchTable TABLE ( Batch varchar(50), [Length] decimal(10,3), Width decimal(10,3), Thickness decimal(10,3), ) INSERT @BatchTable (Batch, [Length], Width, Thickness) EXEC dbo.batch_drawings_NEW @batch So in the second command I don't want the Qty and Location values. However the code above keeps returning the error: "Insert Error: Column name or number of supplied values does not match table"

    Read the article

  • Dynamically reducing image dimension as well as image size in C#

    - by hanesjw
    I have an image gallery that is created using a repeater control. The repeater gets bound inside my code behind file to a table that contains various image paths. The images in my repeater are populated like this <img src='<%# Eval("PicturePath")' %>' height='200px' width='150px'/> (or something along those lines, I don't recall the exact syntax) The problem is sometimes the images themselves are massive so the load times are a little ridiculous. And populating a 150x200px image definitely should not require a 3MB file. Is there a way I can not only change the image dimensions, but shrink the file size down as well? Thanks!
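
    The general idea, regardless of stack (the post is ASP.NET/C#; the sketch below uses Pillow purely for illustration, and the file names and quality setting are assumptions): resize the source to the dimensions you actually display, then re-encode it, so the bytes served match the 150x200 box rather than the multi-megabyte original.

        from PIL import Image

        # Resize to the displayed dimensions, then re-encode as JPEG.
        img = Image.open("original.jpg")                 # hypothetical 3 MB source
        thumb = img.resize((150, 200), Image.LANCZOS)
        thumb.save("thumb.jpg", "JPEG", quality=80)      # a far smaller file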

    Read the article

  • Double buffering not reducing the flicker

    - by fishhead
    I am drawing a grid-work of objects in a Panel. When I scroll the panel quickly I get a flicker. I thought that enabling double buffering might take care of this, but what I find is that it does not completely draw everything and I am left with blank sections. Could anyone give me suggestions as to what may be happening and how I might correct it? UPDATE: I found that I was creating the graphics object with CreateGraphics() rather than using the parameter passed to the Paint method.

    Read the article

  • Reducing Dimension For SVM in Sensor Network

    - by iinception
    Hi Everyone, I am looking for some suggestions on a problem that I am currently facing. I have a set of sensors, say S1-S100, which are triggered when some event E1-E20 is performed. Assume that normally E1 triggers S1-S20, E2 triggers S15-S30, E3 triggers S20-S50, etc., and that E1-E20 are completely independent events. Occasionally an event E might trigger some other, unrelated sensor. I am using an ensemble of 20 SVMs to analyze each event separately. My features are the sensor frequencies F1-F100 (the number of times each sensor is triggered) and a few other related features. I am looking for a technique that can reduce the dimensionality of the sensor features (F1-F100), or some technique that encompasses all of the sensors and reduces the dimension as well (I have been looking into information theory concepts for the last few days). I don't think averaging or maximization is a good idea, as I risk losing information (it did not give me good results). Can somebody please suggest what I am missing here? A paper or some starting idea... Thanks in advance.
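
    One common starting point for this kind of dimensionality reduction (an editorial sketch, not necessarily what the post needs): project the 100 sensor-frequency features onto their top principal components and feed the projection to the SVMs. The data shape and component count below are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        X = np.random.rand(500, 100)        # 500 events x 100 sensor frequencies F1-F100

        pca = PCA(n_components=20)          # keep the 20 directions with the most variance
        X_reduced = pca.fit_transform(X)    # shape (500, 20), input for the SVM ensemble

        print(X_reduced.shape, pca.explained_variance_ratio_.sum())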

    Read the article

  • Reducing unnecessary duplicate values in class member variables

    - by Freshblood
    class A { public int a; public int c; } I will create 10 instances of A, then I will create 15 more instances of A, and so on. The first 10 instances will all have the same value for the a variable, and the next 15 instances will again share a value for a, but I don't mean that both groups have the same value for a. The problem is that storing the same a value 10 times in the first group and 15 times in the second group wastes memory unnecessarily. What would be the best solution or solutions for reducing this unnecessary data in this situation?

    Read the article

  • Reducing the space between sections of the UITableView.

    - by flohei
    Hi there, is there a way to reduce the space between two sections of a UITableView? There are about 15 pixel between every single section I have. I did already try to return 0 for -tableView:heightForFooterInSection: and -tableView:heightForHeaderInSection: but that doesn't change anything. Any suggestions? Thanks in advance. –f

    Read the article

  • Reducing Duplicated Code

    - by cam
    I have some code that works on the color structure like this public void ChangeColor() { thisColor.R = thisColor.R + 5; } Now I need to make a method that changes a different variable depending on what it is passed. Here is what the code looks like now. public void ChangeColor(int RGBValue) { switch(RGBValue) { case 1: thisColor.R = thisColor.R + 5; break; case 2: thiscolor.B = thisColor.B + 5; break; } } Now, this is something I would normally never question, I'd just throw a #region statement around it and call it a day, but this is just an example of what I have, the actual function is quite long. I want it to look like this: public void ChangeColor(int RGBValue) { thiscolor.RGBValue = thiscolor.RGBValue; } So essentially the value would refer to the variable being used. Is there a name for this? Is this what Reflection is for? Or something like that... Is there a way to do this?

    Read the article

  • Finding files with bash, copying them to another location, and reducing the depth of folders

    - by Kevin F
    I'm trying to recover a mate's hard drive. There is no structure whatsoever, so music and images are everywhere but in named folders, sometimes 5 folders deep. I've managed to write a one-liner that finds the files and copies them to a mounted drive, but it preserves the file structure completely. What I'm after is a bit of code that searches the drive and copies the files to another location, keeping just the parent folder with the mp3/jpg files within rather than the complete path. The other issue I have is the music is /folder/folder/folder/Artist/1.mp3..2.mp3..10.mp3 etc etc, so I have to preserve the folder 'Artist' to give him any hope of finding his tracks again. What I have working currently: find /media/HP/ -name *.mp3 -fprintf /media/HP/MUSIC/Script.sh 'mkdir -p "/media/HP/MUSIC/%h" \n cp "%h/%f" "/media/HP/MUSIC/%h/"\n' I then run the generated Script.sh and it does all the copying. Many Thanks

    Read the article

  • Reducing similar commands

    - by Kevin
    How can I reduce the amount of similar commands into a loop? Something like pictureBox7.BackColor = Color.FromArgb(187, 187, 187); pictureBox9.BackColor = Color.FromArgb(187, 187, 187); pictureBox10.BackColor = Color.FromArgb(187, 187, 187); pictureBox11.BackColor = Color.FromArgb(187, 187, 187); pictureBox12.BackColor = Color.FromArgb(187, 187, 187); pictureBox13.BackColor = Color.FromArgb(187, 187, 187); G2g for now, will add on later.

    Read the article

  • Reducing lag when downloading a large amount of data from a webpage

    - by Mahir
    I am getting data via RSS feeds and displaying each article in a table view cell. Each cell has an image view, set to a default image. If the page has an image, the image is to be replaced with the image from the article. As of now, each cell downloads the source code from the web page, causing the app to lag when I push the view controller and when I try scrolling. Here is what I have in the cellForRowAtIndexPath: method. NSString * storyLink = [[stories objectAtIndex: storyIndex] objectForKey: @"link"]; storyLink = [storyLink stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSString *sourceCode = [NSString stringWithContentsOfURL:[NSURL URLWithString:storyLink] encoding:NSUTF8StringEncoding error:&error]; NSString *startPt = @"instant-gallery"; NSString *startPt2 = @"<img src=\""; if ([sourceCode rangeOfString:startPt].length != 0) { //webpage has images // find the first "<img src=...>" tag starting from "instant-gallery" NSString *trimmedSource = [sourceCode substringFromIndex:NSMaxRange([sourceCode rangeOfString:startPt])]; trimmedSource = [trimmedSource substringFromIndex:NSMaxRange([trimmedSource rangeOfString:startPt2])]; trimmedSource = [trimmedSource substringToIndex:[trimmedSource rangeOfString:@"\""].location]; NSURL *url = [NSURL URLWithString:trimmedSource]; NSData *data = [NSData dataWithContentsOfURL:url]; UIImage *image = [UIImage imageWithData:data]; cell.picture.image = image; Someone suggested using NSOperationQueue. Would this way be a good solution? EDIT: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *MyIdentifier = @"FeedCell"; LMU_LAL_FeedCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier]; if (cell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"LMU_LAL_FeedCell" owner:self options:nil]; cell = (LMU_LAL_FeedCell*) [nib objectAtIndex:0]; } int storyIndex = [indexPath indexAtPosition: [indexPath length] - 1]; NSString *untrimmedTitle = [[stories objectAtIndex: storyIndex] objectForKey: @"title"]; cell.title.text = [untrimmedTitle stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]]; CGSize maximumLabelSize = CGSizeMake(205,9999); CGSize expectedLabelSize = [cell.title.text sizeWithFont:cell.title.font constrainedToSize:maximumLabelSize]; //adjust the label to the the new height. 
CGRect newFrame = cell.title.frame; newFrame.size.height = expectedLabelSize.height; cell.title.frame = newFrame; //position frame of date label CGRect dateNewFrame = cell.date.frame; dateNewFrame.origin.y = cell.title.frame.origin.y + cell.title.frame.size.height + 1; cell.date.frame = dateNewFrame; cell.date.text = [self formatDateAtIndex:storyIndex]; dispatch_queue_t someQueue = dispatch_queue_create("cell background queue", NULL); dispatch_async(someQueue, ^(void){ NSError *error = nil; NSString * storyLink = [[stories objectAtIndex: storyIndex] objectForKey: @"link"]; storyLink = [storyLink stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSString *sourceCode = [NSString stringWithContentsOfURL:[NSURL URLWithString:storyLink] encoding:NSUTF8StringEncoding error:&error]; NSString *startPt = @"instant-gallery"; NSString *startPt2 = @"<img src=\""; if ([sourceCode rangeOfString:startPt].length != 0) { //webpage has images // find the first "<img src=...>" tag starting from "instant-gallery" NSString *trimmedSource = [sourceCode substringFromIndex:NSMaxRange([sourceCode rangeOfString:startPt])]; trimmedSource = [trimmedSource substringFromIndex:NSMaxRange([trimmedSource rangeOfString:startPt2])]; trimmedSource = [trimmedSource substringToIndex:[trimmedSource rangeOfString:@"\""].location]; NSURL *url = [NSURL URLWithString:trimmedSource]; NSData *data = [NSData dataWithContentsOfURL:url]; UIImage *image = [UIImage imageWithData:data]; dispatch_async(dispatch_get_main_queue(), ^(void){ cell.picture.image = image; }); }) //error: expected expression } return cell; //error: expected identifier } //error extraneous closing brace

    Read the article

  • Reducing loading time of 100 pages of Google

    - by Nikhil
    For my project I need to access 100 entire pages of Google at a time for a particular keyword. I used a 'for' loop to access the page URLs in my C# code, but it is taking a long time, and sometimes it shows an HttpRequest error. Is there any way to increase the speed?

    Read the article

  • Can reducing index length in Javascript associative array save memory

    - by d777
    I am trying to build a large Associative Array in JavaScript (22,000 elements). Do I need to worry about the length of the indices with regard to memory usage? In other words, which of the following options saves memory, or are they the same in memory consumption? Option 1: var student = new Array(); for (i=0; i<22000; i++) student[i] = { "studentName": token[0], "studentMarks": token[1], "studentDOB": token[2] }; Option 2: var student = new Array(); for (i=0; i<22000; i++) student[i] = { "n": token[0], "m": token[1], "d": token[2] }; I tried to test this in Google Chrome DevTools, but the numbers are too inconsistent to make a decision. My best guess is that because the Array indices repeat, the browser can optimize memory usage by not repeating them for each student[i], but that is just a guess. Thanks.

    Read the article

  • Combining data sets without losing observations in SAS

    - by John
    Hey guys, I know, another post another problem :D :(. I took a screenshot to easily explain my problem. http://i39.tinypic.com/rhms0h.jpg As you can see I want to merge two tables (again), the Base & Analyst table. What I want to achieve is displayed in the bottom right corner table. I'm calculating the number of total analysts and female analysts for each month in the analyst table. In the base table I have different observations for one company (here company Alcoa with ticker AA). When I use the following command: data want; merge base analyst; by month ; run; I get the problem shown in the top right corner. My observations in the main table are being narrowed down to only 4 observations (one observation for each different year: 2001, 2002, 2005, 2006). What I want is that the observations are not reduced, but that for every year the same data is placed as shown in the bottom right corner. What am I missing in my merge command? In both tables I have month as a time count variable (the observations in my base table are monthly) on which I need to merge. For clarity I added 2 screenshots of my real databases in SAS. The base table: http://i42.tinypic.com/dr5jky.jpg The analyst table: http://i40.tinypic.com/eqpmqq.jpg Here is what my merged table looks like: http://i43.tinypic.com/116i62s.jpg You can clearly see that the merged table only has four observations left for AA (one for each unique year) instead of the original 8. Anyone have an idea how to solve this?

    Read the article
