Search Results

Search found 20890 results on 836 pages for 'self reference'.

  • Force a view change from a button when using UITabBarController

    - by user342197
    Hello - When using a UITabBarController, the user enters some data on View1 and presses a button; I then need to perform some calculations and present the results on View2. I have an AppDelegate, View1Controller, View2Controller, and View3Controller (View3 is a basically static view). My AppDelegate declares:

        UITabBarController *rootController;

    On View1, the calculations are performed in an IBAction for buttonPressed; however, I can't seem to force the view to switch to View2 programmatically. I have done a lot of searching for similar problems, and think I should be doing something like:

        self.rootController.selectedIndex = 1;

    However, when I do this from within buttonPressed on my View1Controller, I get the error "request for member rootController in something not in a structure or union". I think I'm missing something basic here... I probably need to do something with my AppDelegate, but I'm banging my head against the wall. Can anyone provide some guidance in this situation - like the key things I should do in the View1Controller header and implementation with reference to my AppDelegate? Thank you!
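
    A minimal sketch of one likely fix (editorial, not from the original post): the error suggests self - a view controller - has no rootController member, so the tab bar controller has to be reached through the application delegate. MyAppDelegate is a hypothetical name; substitute the real app delegate class.

        // In View1Controller.m; MyAppDelegate is an assumed class name.
        #import "MyAppDelegate.h"

        - (IBAction)buttonPressed:(id)sender {
            // ... perform the calculations ...
            MyAppDelegate *appDelegate =
                (MyAppDelegate *)[[UIApplication sharedApplication] delegate];
            appDelegate.rootController.selectedIndex = 1; // switch to View2's tab
        }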

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem. I need to store a huge amount of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important).

    The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all!

    I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed.

    What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
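
    One technique often suggested for this shape of problem (an editorial sketch, not from the question) is a memory-mapped file: the OS pages slices in and out on demand, so the whole 32 GB never has to sit in RAM at once. A minimal sketch in Python with NumPy, assuming the NxNxNxN float64 layout described above:

        import numpy as np

        N = 256  # illustrative: 256**4 * 8 bytes = 32 GiB

        # Disk-backed array; the OS pages data in and out on demand
        # instead of loading all 32 GiB up front.
        data = np.memmap('data.bin', dtype=np.float64, mode='w+',
                         shape=(N, N, N, N))

        data[0, 1, :, :] = 42.0   # operate on 2D slices in place
        block = data[0, 1]        # reads only the pages actually touched
        data.flush()              # push dirty pages back to disk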

  • Are MEF's ComposableParts contracts instance-based?

    - by Dave
    I didn't really know how to phrase the title of my question, so my apologies in advance. I read through parts of the MEF documentation to try to find the answer, but couldn't find it.

    I'm using ImportMany to allow MEF to create multiple instances of a specific plugin. That plugin Imports several parts, and within calls to a specific instance, it wants these Imports to be singletons. However, what I don't want is for all instances of this plugin to use the same singleton.

    For example, let's say my application ImportManys Blender appliances. Every time I ask for one, I want a different Blender. However, each Blender Imports a ControlPanel, and I want each Blender to have its own ControlPanel. To make things a little more interesting, each Blender can load BlendPrograms, which are contained within their own assemblies, and MEF takes care of this loading. A BlendProgram might need to access the ControlPanel to get the speed, but I want to ensure that it is accessing the correct ControlPanel (i.e. the one that is associated with the Blender that is associated with the program!).

    I believe that the confusion could come from an inherently poor design: the BlendProgram shouldn't touch the ControlPanel directly; instead, perhaps the BlendProgram should get the speed via the Blender, which would then delegate the request to its ControlPanel. If this is the case, then I assume the BlendProgram needs to hold a reference to a specific Blender. In order to do this, is the right way to leverage MEF to use an ImportingConstructor for BlendProgram, i.e.

        public class BlendProgram : IBlendProgram
        {
            [ImportingConstructor] // the attribute goes on the constructor, not the class
            public BlendProgram(Blender blender) { }
        }

    And if this is the case, how do I know that MEF will use the intended Blender plugin?
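
    For the "each importer gets its own instance" half of the question, MEF expresses this with creation policies. A hedged sketch (the Blender/ControlPanel names come from the question; the attributes are standard System.ComponentModel.Composition):

        using System.ComponentModel.Composition;

        // NonShared parts are constructed once per importer, so every Blender
        // composed by MEF receives its own ControlPanel rather than a shared one.
        [Export(typeof(ControlPanel))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class ControlPanel
        {
            public double Speed { get; set; }
        }

        [Export(typeof(Blender))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class Blender
        {
            [Import]
            public ControlPanel Panel { get; set; } // distinct instance per Blender
        }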

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. It's not escaped me that using IDisposable objects this way is possibly just dirty pool (plz advise?) for reasons I've not considered, but, that issue aside, it seems fairly straightforward (and good encapsulation). Codes:

        abstract class Node
        {
            protected Node (Stream raw)
            {
                // calculate/generate some base class properties
            }
        }

        class FilesystemNode : Node
        {
            public FilesystemNode (FileStream fs)
                : base (fs)
            {
                // all good here; disposing of fs not our responsibility
            }
        }

        class CompositeNode : Node
        {
            public CompositeNode (IEnumerable some_stuff)
                : base (GenerateRaw (some_stuff))
            {
                // rogue stream from GenerateRaw now loose in the wild!
            }

            static Stream GenerateRaw (IEnumerable some_stuff)
            {
                var content = new MemoryStream ();
                // molest elements of some_stuff into proper format, write to stream
                content.Seek (0, SeekOrigin.Begin);
                return content;
            }
        }

    I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose () it later in the constructor, and adding a using statement in GenerateRaw () is self-defeating since I need the stream returned. Is there a better way to do this?

    Preemptive strikes:

    - Yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses.
    - I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller).
    - The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, with the body of GenerateRaw () moved into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor, and not being able to guarantee that it runs for every subtype ever (a Node is not a Node, semantically, without these properties initialized), gave me heebie-jeebies far worse than the (potential) resource leak here does.
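
    One pattern sometimes suggested for this shape of problem (a sketch under the question's own types, not the author's code): make the synthesizing constructor private and wrap the stream's lifetime in a static factory method, so the base constructor finishes while the stream is still inside a using block.

        using System;
        using System.Collections;
        using System.IO;

        abstract class Node
        {
            protected Node (Stream raw) { /* compute base properties from raw */ }
        }

        class CompositeNode : Node
        {
            // Private: callers go through Create, which owns the stream's lifetime.
            private CompositeNode (Stream raw) : base (raw) { }

            public static CompositeNode Create (IEnumerable some_stuff)
            {
                // The base constructor has completed before the using block disposes.
                using (var raw = GenerateRaw (some_stuff))
                    return new CompositeNode (raw);
            }

            static Stream GenerateRaw (IEnumerable some_stuff)
            {
                var content = new MemoryStream ();
                // write some_stuff into content here, as in the question
                content.Seek (0, SeekOrigin.Begin);
                return content;
            }
        }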

  • boost::shared_ptr in Objective-C++

    - by John Smith
    This is a better understanding of a question I had earlier. I have the following Objective-C++ object:

        @interface OCPP {
            MyCppobj * cppobj;
        }
        @end

        @implementation OCPP
        - (OCPP *) init {
            if ((self = [super init])) {
                cppobj = new MyCppobj;
            }
            return self;
        }
        @end

    Then I create a completely different object which needs to use cppobj in a boost::shared_ptr (I have no choice in this matter; it's part of a huge library which I cannot change):

        @interface NOBJ
        - (void) use_cppobj_as_shared_ptr {
            // get an OCPP obj called occ from somewhere..
            // troubling line here
        }
        @end

    I have tried the following, and it failed. I synthesized cppobj, then created a shared_ptr at the "troubling line" this way:

        MyCppobj * cpp = [occ cppobj];
        bsp = boost::shared_ptr<MyCppobj>(cpp);

    It works fine the first time around. Then I destroy the NOBJ and recreate it; when I ask for cppobj, it's gone. Presumably shared_ptr decided it was no longer needed and did away with it.

    So I need help. How can I keep cppobj alive? Is there a way to destroy bsp (or its reference to cppobj) without destroying cppobj?
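
    A common workaround when ownership must stay with the Objective-C object (a hedged sketch, not from the original thread): hand boost::shared_ptr a deleter that does nothing, so destroying the last shared_ptr copy never deletes cppobj.

        #include <boost/shared_ptr.hpp>

        struct null_deleter {
            void operator()(void const *) const { } // deliberately do nothing
        };

        // ...
        MyCppobj *cpp = [occ cppobj];
        // bsp can now be copied and destroyed freely; cppobj remains owned by
        // OCPP, whose dealloc should eventually call `delete cppobj`.
        boost::shared_ptr<MyCppobj> bsp(cpp, null_deleter());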

  • g++ linker can't find const member function

    - by Max
    I have a Point class (with integer members x and y) that has a member function withinBounds, declared like so:

        bool withinBounds(const Point&, const Point&) const;

    and defined like this:

        bool Point::withinBounds(const Point& TL, const Point& BR) const
        {
            if(x < TL.getX()) return false;
            if(x > BR.getX()) return false;
            if(y < TL.getY()) return false;
            if(y > BR.getY()) return false;

            // Success
            return true;
        }

    Then, in another file, I call withinBounds like this:

        Point pos = currentPlayer->getPosition();
        if(pos.withinBounds(topleft, bottomright)) {
            // code block
        }

    This compiles fine, but it fails to link. g++ gives me this error:

        /home/max/Desktop/Development/YARL/yarl/src/GameData.cpp:61: undefined reference to 'yarl::utility::Point::withinBounds(yarl::utility::Point const&, yarl::utility::Point const&)'

    When I make the function not const, it links fine. Anyone know the reason why? The linker error looks like it's looking for a non-const version of the function, but I don't know why it would.
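
    Not from the question, but one way to check which symbols actually landed in each object file, assuming a Unix-like toolchain with binutils:

        # Demangled symbols: 'T' = defined here, 'U' = needed from elsewhere.
        nm -C Point.o    | grep withinBounds
        nm -C GameData.o | grep withinBounds
        # If Point.o only defines a non-const withinBounds, the object file
        # predates the const change - a stale build; a full rebuild is the
        # usual first thing to try.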

  • JUnit confusion: use 'extends TestCase' or '@Test'?

    - by Rabarberski
    I've found the proper use (or at least the documentation) of JUnit very confusing. This question serves both as a future reference and as a real question. If I've understood correctly, there are two main approaches to creating and running a JUnit test:

    Approach A: create a class that extends TestCase, and start test methods with the word test. When running the class as a JUnit Test (in Eclipse), all methods starting with the word test are automatically run.

        import junit.framework.TestCase;

        public class DummyTestA extends TestCase {
            public void testSum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Approach B: create a 'normal' class and prepend a @Test annotation to the method. Note that you do NOT have to start the method with the word test.

        import org.junit.*;
        import static org.junit.Assert.*;

        public class DummyTestB {
            @Test
            public void Sum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Mixing the two seems not to be a good idea; see e.g. this stackoverflow question.

    Now, my question(s):

    - What is the preferred approach, or when would you use one instead of the other?
    - Approach B allows for testing for exceptions by extending the @Test annotation, as in @Test(expected = ArithmeticException.class). But how do you test for exceptions when using approach A?
    - When using approach A, you can group a number of test classes in a test suite:

          TestSuite suite = new TestSuite("All tests");
          suite.addTestSuite(DummyTestA.class);
          suite.addTestSuite(DummyTestAbis.class);

      But this can't be used with approach B (since each test class should subclass TestCase). What is the proper way to group tests for approach B?
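
    For the grouping question, JUnit 4 has its own suite mechanism; a brief sketch (DummyTestB is taken from the example above, DummyTestBbis is a hypothetical second test class):

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // JUnit 4 equivalent of a TestSuite for annotation-style tests.
        @RunWith(Suite.class)
        @Suite.SuiteClasses({ DummyTestB.class, DummyTestBbis.class })
        public class AllTests {
            // intentionally empty: the annotations carry the configuration
        }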

  • PHP cURL error: "Empty reply from server"

    - by ABach
    All, I have a class function to interface with the RESTful API for Last.FM - its purpose is to grab the most recent tracks for my user. Here it is:

        private static $base_url = 'http://ws.audioscrobbler.com/2.0/';

        public static function getTopTracks($options = array()) {
            $options = array_merge(array(
                'user' => 'bachya',
                'period' => NULL,
                'api_key' => 'xxxxx...', // obfuscated, obviously
            ), $options);
            $options['method'] = 'user.getTopTracks';

            // Initialize cURL request and set parameters
            $ch = curl_init();
            curl_setopt_array($ch, array(
                CURLOPT_URL => self::$base_url,
                CURLOPT_POST => TRUE,
                CURLOPT_POSTFIELDS => $options,
                CURLOPT_RETURNTRANSFER => TRUE,
                CURLOPT_TIMEOUT => 30,
                CURLOPT_USERAGENT => 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)'
            ));
            $results = curl_exec($ch);
            return $results;
        }

    This returns "Empty reply from server". I know that some have suggested that this error comes from some fault in network infrastructure; I do not believe this to be true in my case. If I run a cURL request through the command line, I get my data; the Last.FM service is up and accessible. Before I go to those folks and see if anything has changed, I wanted to check with you fine folks and see if there's some issue in my code that would be causing this. Thanks!
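
    Two things worth checking in a case like this (a hedged suggestion, not a confirmed diagnosis): passing an array to CURLOPT_POSTFIELDS makes cURL send multipart/form-data, which some APIs reject, and curl_error() will report what actually happened on the wire:

        // URL-encode the fields so the request goes out as
        // application/x-www-form-urlencoded instead of multipart/form-data.
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($options));

        $results = curl_exec($ch);
        if ($results === FALSE) {
            // e.g. "Empty reply from server" plus the underlying errno
            error_log('cURL error (' . curl_errno($ch) . '): ' . curl_error($ch));
        }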

  • How to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default-constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T().

    I am basing my implementation on existing sparse vectors in numeric libraries - though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store:

        size_t* indices_; // a dynamic array
        T* values_;       // a dynamic array
        int size_;
        int capacity_;

    Why don't they simply use

        vector<pair<size_t, T>> data_;

    My main question is: what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations, and might have better locality of reference. The other solution might search more quickly, since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type?

    It seems to me that vector of pairs is the better solution, yet both containers chose the other solution. Why?
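
    A small sketch of why the split (structure-of-arrays) layout can help lookups - the binary search in the second version scans a dense run of raw indices with no interleaved values (illustrative code, not from either library):

        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        // AoS: each probe drags a T into cache alongside the index it inspects.
        template <typename T>
        T find_aos(const std::vector<std::pair<std::size_t, T>>& data, std::size_t i) {
            auto it = std::lower_bound(data.begin(), data.end(), i,
                [](const std::pair<std::size_t, T>& p, std::size_t idx) {
                    return p.first < idx;
                });
            return (it != data.end() && it->first == i) ? it->second : T();
        }

        // SoA: the search touches only the contiguous array of indices.
        template <typename T>
        T find_soa(const std::vector<std::size_t>& indices,
                   const std::vector<T>& values, std::size_t i) {
            auto it = std::lower_bound(indices.begin(), indices.end(), i);
            return (it != indices.end() && *it == i)
                ? values[it - indices.begin()] : T();
        }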

  • ruby/datamapper: Refactor class methods to module

    - by DeSchleib
    Hello, I have the following code and have tried the whole day to refactor the class methods into a separate module, to share the functionality with all of my model classes. Code (http://pastie.org/974847):

        class Merchant
          include DataMapper::Resource
          property :id, Serial
          [...]
          class << self
            @allowed_properties = [:id, :vendor_id, :identifier]
            alias_method :old_get, :get
            def get *args
              [...]
            end
            def first_or_create_or_update attr_hash
              [...]
            end
          end
        end

    I'd like to achieve something like:

        class Merchant
          include DataMapper::Resource
          include MyClassFunctions
          [...]
        end

        module MyClassFunctions
          def get [...]
          def first_or_create_or_update [...]
        end

        # desired usage:
        Merchant.allowed_properties = [:id]
        Merchant.get( :id => 1 )

    But unfortunately, my Ruby skills are too bad. I read a lot of stuff (e.g. here) and now I'm even more confused. I stumbled over the following two points:

    - alias_method will fail, because get is dynamically defined in the DataMapper::Resource module.
    - How do I get a class method allowed_properties by including a module?

    What's the Ruby way to go? Many thanks in advance.
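
    The usual Ruby idiom for adding class methods via include is the included hook. A sketch, under the assumption that MyClassFunctions is included after DataMapper::Resource (so DataMapper's get already sits lower in the singleton ancestor chain and remains reachable via super):

        module MyClassFunctions
          def self.included(base)
            base.extend(ClassMethods) # promote the nested module to class methods
          end

          module ClassMethods
            attr_accessor :allowed_properties

            def first_or_create_or_update(attr_hash)
              # shared implementation here
            end

            def get(*args)
              # runs before DataMapper's get; delegate onward with super
              super
            end
          end
        end

        class Merchant
          include DataMapper::Resource
          include MyClassFunctions # include *after* DataMapper::Resource
          property :id, Serial
        end

        Merchant.allowed_properties = [:id]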

  • How to stop Android GPS using "Mobile data"

    - by prepbgg
    My app requests location updates with "minTime" set to 2 seconds. When "Mobile data" is switched on (in the phone's settings) and GPS is enabled, the app uses mobile data at between 5 and 10 megabytes per hour. This is recorded in the ICS "Data usage" screen as usage by "Android OS".

    In an attempt to prevent this I have unticked Settings - "Location services" - "Google's location service". Does this refer to Assisted GPS, or is it something more than that? Whatever it is, it seems to make no difference to my app's internet access. As further confirmation that it is the GPS usage by my app that is causing the mobile data access, I have observed that the internet data activity indicator on the status bar shows activity when, and only when, the GPS indicator is present.

    The only way to prevent this mobile data usage seems to be to switch "Mobile data" off, and GPS accuracy seems to be almost as good without the support of mobile data. However, it is obviously unsatisfactory to have to switch mobile data off. The only permissions in the Manifest are "android.permission.ACCESS_FINE_LOCATION" (and "android.permission.WRITE_EXTERNAL_STORAGE"), so the app has no explicit permission to use internet data.

    The LocationManager code is:

        criteria.setAccuracy(Criteria.ACCURACY_FINE);
        criteria.setSpeedRequired(false);
        criteria.setAltitudeRequired(false);
        criteria.setBearingRequired(true);
        criteria.setCostAllowed(false);
        criteria.setPowerRequirement(Criteria.NO_REQUIREMENT);
        bestProvider = lm.getBestProvider(criteria, true);
        if (bestProvider != null) {
            lm.requestLocationUpdates(bestProvider, gpsMinTime, gpsMinDistance, this);
        }

    The reference for LocationManager.getBestProvider says: "If no provider meets the criteria, the criteria are loosened ... Note that the requirement on monetary cost is not removed in this process." However, despite setting setCostAllowed to false, the app still incurs a potential monetary cost. What else can I do to prevent the app from using mobile data?
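
    One thing worth trying (a hedged suggestion rather than a known fix): skip the Criteria lookup entirely and pin the request to the GPS provider, so the network (cell/Wi-Fi) provider can never be selected:

        // Request updates from the raw GPS receiver only; no criteria
        // matching, hence no chance of falling back to the network provider.
        if (lm.isProviderEnabled(LocationManager.GPS_PROVIDER)) {
            lm.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                    gpsMinTime, gpsMinDistance, this);
        }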

  • Optimizing headers for UITableView?

    - by Pask
    I have an optimization problem with the headers of a table with plain style. If I use the standard view for the table (the classic gray with titles set by titleForHeaderInSection:), everything is OK and the scrolling is smooth and immediate. When, instead, I use this code to set my own view:

        - (UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section {
            return [self headerPerTitolo:[titoliSezioni objectAtIndex:section]];
        }

        - (UIImageView *)headerPerTitolo:(NSString *)titolo {
            UIImageView *headerView = [[[UIImageView alloc] initWithFrame:CGRectMake(10.0, 0.0, 320.0, 44.0)] autorelease];
            headerView.image = [UIImage imageNamed:kNomeImmagineHeader];
            headerView.alpha = kAlphaSezioniTablePlain;

            UILabel *headerLabel = [[[UILabel alloc] initWithFrame:CGRectZero] autorelease];
            headerLabel.backgroundColor = [UIColor clearColor];
            headerLabel.opaque = NO;
            headerLabel.textColor = [UIColor whiteColor];
            headerLabel.font = [UIFont boldSystemFontOfSize:16];
            headerLabel.frame = CGRectMake(10.0, -11.0, 320.0, 44.0);
            headerLabel.textAlignment = UITextAlignmentLeft;
            headerLabel.text = titolo;

            [headerView addSubview:headerLabel];
            return headerView;
        }

    scrolling is jerky and not immediate (sliding a finger on the screen does not produce an immediate shift of the table). I don't know what causes this; maybe the fact that every time viewForHeaderInSection: is called, the code creates a new UIImageView.

    I have tried many ways to solve the problem, such as creating an array of all the necessary views in advance: apart from more time spent loading at startup, the low reactivity of the table remains. I also tried reducing the image size from about 66 KB to 4 KB: besides a visible loss of color quality (which distorts the original graphics a bit), the problem persists!

    Perhaps you have suggestions, or know some obscure techniques that would let me optimize this aspect of my application... I apologize for my English; I used Google for translation.
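
    For reference, a small sketch of the kind of header cache the last suspicion points at (hypothetical code built on the method names above; headerCache is an assumed NSMutableDictionary ivar), so each header view is built once rather than on every scroll pass:

        - (UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section {
            NSString *titolo = [titoliSezioni objectAtIndex:section];
            UIView *header = [headerCache objectForKey:titolo];
            if (header == nil) {
                header = [self headerPerTitolo:titolo]; // build once per title
                [headerCache setObject:header forKey:titolo];
            }
            return header;
        }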

  • Ruby ROXML - how to get an array to render its XML?

    - by Brian D.
    I'm trying to create a Messages object that inherits Array. The messages will gather a group of Message objects. I'm trying to create an XML output with ROXML that looks like this:

        <messages>
          <message>
            <type></type>
            <code></code>
            <body></body>
          </message>
          ...
        </messages>

    However, I can't figure out how to get the Message objects in the Messages object to display in the XML. Here is the code I've been working with:

        require 'roxml'

        class Message
          include ROXML
          xml_accessor :type
          xml_accessor :code
          xml_accessor :body
        end

        class Messages < Array
          include ROXML

          # I think this is the problem - but how do I tell ROXML that
          # the messages are in this instance of Array?
          xml_accessor :messages, :as => [Message]

          def add(message)
            self << message
          end
        end

        message = Message.new
        message.type = "error"
        message.code = "1234"
        message.body = "This is a test message."

        messages = Messages.new
        messages.add message

        puts messages.length
        puts messages.to_xml

    This outputs:

        1
        <messages/>

    So, the Message object I added to messages isn't getting displayed. Anyone have any ideas? Or am I going about this the wrong way? Thanks for any help.

  • returning autorelease NSString still causes memory leaks

    - by hookjd
    I have a simple function that returns an NSString after decoding it. I use it a lot throughout my application, and it appears to create a memory leak (according to the Leaks tool) every time I use it. Leaks tells me the problem is on the line where I alloc the NSString that I am going to return, even though I autorelease it. Here is the function:

        - (NSString *) decodeValue {
            NSString *newString;
            newString = [self stringByReplacingOccurrencesOfString:@"#" withString:@"$"];
            NSData *stateData = [NSData dataWithBase64EncodedString:newString];
            NSString *convertState = [[[NSString alloc] initWithData:stateData
                                                            encoding:NSUTF8StringEncoding] autorelease];
            return convertState;
        }

    My understanding of autorelease is that it should be used in exactly this way - I want to hold onto the object just long enough to return it from my function, and then let the object be autoreleased later. So I believe I can use this function through code like this without manually releasing anything:

        NSString *myDecodedString = [myString decodeValue];

    But this process is reporting leaks and I don't understand how to change it to avoid them. What am I doing wrong?

  • WCF service consuming passively issued SAML token

    - by Neillyboy
    What is the best way to pass an existing SAML token from a website already authenticated via a passive STS?

    We have built an Identity Provider which is issuing passive claims to the website for authentication. We have this working. Now we would like to add some WCF services into the mix - calling them from the context of the already-authenticated web application. Ideally we would just like to pass the SAML token on without doing anything to it (i.e. without adding new claims or re-signing).

    All of the examples I have seen require the ActAs STS implementation - but is this really necessary? It seems a bit bloated for what we want to achieve. I would have thought a simple implementation passing the bootstrap token into the channel - using the CreateChannelActingAs or CreateChannelWithIssuedToken mechanism (and setting ChannelFactory.Credentials.SupportInteractive = false) to call the WCF service with the correct binding (what would that be?) - would have been enough.

    We are using the Fabrikam example code as a reference but, as I say, think the ActAs functionality here is overkill for what we are trying to achieve.
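
    For reference, the shape that bootstrap-token approach usually takes with WIF 1.0 (a heavily hedged sketch: ConfigureChannelFactory and CreateChannelWithIssuedToken are WIF extension methods from Microsoft.IdentityModel, saving bootstrap tokens must be enabled in the web.config, and both IMyService and the "FederatedService" endpoint name are hypothetical):

        using System.ServiceModel;
        using System.Threading;
        using Microsoft.IdentityModel.Claims;

        // Grab the SAML token saved when the user signed in passively.
        var identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;
        var bootstrapToken = identity.BootstrapToken;

        // "FederatedService" is an assumed endpoint configuration name,
        // pointing at a federation binding on the service side.
        var factory = new ChannelFactory<IMyService>("FederatedService");
        factory.ConfigureChannelFactory();               // WIF extension method
        factory.Credentials.SupportInteractive = false;  // no CardSpace UI

        // Send the existing token as the issued token; no ActAs round-trip.
        var channel = factory.CreateChannelWithIssuedToken(bootstrapToken);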

  • ASP.NET MVC Newbie: why isn't my model available when creating a strongly-typed view?

    - by Rax Olgud
    I'm starting with ASP.NET MVC, and working with NerdDinner as a reference. I did the following:

    - Created a new project
    - Added a couple of tables
    - Created LINQ to SQL classes for them
    - Created a new controller

    My Models directory now contains MyModel.dbml, under which I have MyModel.designer.cs, which contains classes for models relating to both of my tables (let's call them Categories and Products). Now, under my new controller, I would like to create a strongly-typed view. So, for example, I have the following code in my controller (for my application I must work by name, and the "Name" field is unique):

        public ActionResult Details(string name)
        {
            MyModelDataContext db = new MyModelDataContext();
            Product user = db.Products.Single(t => t.Name == name);
            return View(user);
        }

    I would like to create a strongly-typed view. So I right-click the line "return View(user)", and choose "Add View...". I click "Create a strongly-typed view". When I click the dropdown of "View data class", I don't see my models, but only:

        MyProject.Controllers.AccountMembershipService
        MyProject.Controllers.FormsAuthenticationService

    What am I missing?

  • What is the base open source java package to filter/match URLs?

    - by Boaz
    Hi, I have a high-performance application which deals with URLs. For every URL it needs to retrieve the appropriate settings from a predefined pool. Every settings object is associated with a URL pattern which indicates which URLs should use these settings. The matching rules are as follows:

    - A "google.com" match pattern should match all URLs pointing to the google domain (thus, maps.google.com and www.google.com/match are matched).
    - "*.google.com" should match all URLs pointing to a subdomain of google.com (thus, maps.google.com matches, but google.com and www.google.com don't).
    - "maps.google.com" should match all URLs pointing to this specific subdomain.

    Apart from the above rules, every match rule can contain a path, which means that the path part of the URL should start with the match rule's path. So "*.google.com/maps" matches "maps.google.com/maps" but not "maps.google.com/advanced".

    As you can see, the rules above overlap. Where two rules match the same URL, the most specific should apply; the list above is ranked from least specific to most specific.

    This seems to be such a standard problem that I was hoping to use a ready-made library rather than program it myself. Google reveals a couple of options, but without a clear way to choose between them. What would you recommend as a good library for this task? Thanks, Boaz

  • Why can't I pass a Marshaled interface as an integer (or pointer)?

    - by cemick
    I passed a ref of an interface from a Visio Add-in to MyCOMServer (http://stackoverflow.com/questions/2455183/interface-marshalling-in-delphi). I have to pass the interface as a pointer to internal methods of MyCOMServer. I try to pass the interface to an internal method as a pointer to the interface, but after casting back, calling a method of the interface raises an exception.

    A simple example (the first block executes without error, but in the second block I get an exception after accessing a property of the IVApplication interface):

        procedure TMyCOMServer.test(const Interface_: IDispatch); stdcall;
        var
          IMy: _IMyInterface;
          V: Variant;
          Str: String;
          I: Integer;
          Vis: IVApplication;
        begin
          ......
          Self.QueryInterface(_IMyInterface, IMy);
          Str := IMy.ApplicationName;
          V := Integer(IMy);
          I := V;
          Pointer(IMy) := Pointer(I);
          Str := IMy.SomeProperty; // normal completion

          Str := (Interface_ as IVApplication).Path;
          V := Interface_;
          I := V;
          Pointer(Vis) := Pointer(I);
          Str := Vis.Path; // 'access violation at 0x76358e29: read of address 0xfeeefeee'
        end;

    Why can't I do it like this?

  • WPF - How to stop an ItemsControl pseudo-grid's columns from dancing/jumping around during layout

    - by Drew Noakes
    Several other questions on SO have come to the same conclusion I have - using an ItemsControl with a DataTemplate for each item, constructed to position items such that they resemble a grid, is much simpler (especially to format) than using a ListView. The code resembles:

        <StackPanel Grid.IsSharedSizeScope="True">

          <!-- Header -->
          <Grid>
            <Grid.ColumnDefinitions>
              <ColumnDefinition Width="Auto" SharedSizeGroup="Column1" />
              <ColumnDefinition Width="Auto" SharedSizeGroup="Column2" />
            </Grid.ColumnDefinitions>
            <TextBlock Grid.Column="0" Text="Column Header 1" />
            <TextBlock Grid.Column="1" Text="Column Header 2" />
          </Grid>

          <!-- Items -->
          <ItemsControl ItemsSource="{Binding Path=Values, Mode=OneWay}">
            <ItemsControl.ItemTemplate>
              <DataTemplate>
                <Grid>
                  <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" SharedSizeGroup="Column1" />
                    <ColumnDefinition Width="Auto" SharedSizeGroup="Column2" />
                  </Grid.ColumnDefinitions>
                  <TextBlock Grid.Column="0" Text="{Binding ColumnProperty1}" />
                  <TextBlock Grid.Column="1" Text="{Binding ColumnProperty2}" />
                </Grid>
              </DataTemplate>
            </ItemsControl.ItemTemplate>
          </ItemsControl>

        </StackPanel>

    The problem I'm seeing is that whenever I swap the object to which the ItemsSource is bound (it's an ObservableCollection that I replace the reference to, rather than clear and re-add), the entire pseudo-grid dances about for a few seconds. Presumably it is making a few layout passes to get all the Auto-width columns to match up. This is very distracting for my users and I'd like to get it sorted out. Has anyone else seen this?

  • Image resizing - sometimes very poor quality?!

    - by eWolf
    I'm resizing some images to the screen resolution of the user; if the aspect ratio is wrong, the image should be cropped. My code looks like this:

        protected void ConvertToBitmap(string filename)
        {
            var origImg = System.Drawing.Image.FromFile(filename);
            var widthDivisor = (double)origImg.Width / (double)System.Windows.Forms.Screen.PrimaryScreen.Bounds.Width;
            var heightDivisor = (double)origImg.Height / (double)System.Windows.Forms.Screen.PrimaryScreen.Bounds.Height;
            int newWidth, newHeight;
            if (widthDivisor < heightDivisor)
            {
                newWidth = (int)((double)origImg.Width / widthDivisor);
                newHeight = (int)((double)origImg.Height / widthDivisor);
            }
            else
            {
                newWidth = (int)((double)origImg.Width / heightDivisor);
                newHeight = (int)((double)origImg.Height / heightDivisor);
            }
            var newImg = origImg.GetThumbnailImage(newWidth, newHeight, null, IntPtr.Zero);
            newImg.Save(this.GetBitmapPath(filename), System.Drawing.Imaging.ImageFormat.Bmp);
        }

    In most cases, this works fine. But for some images, the result has extremely poor quality. It looks like they were resized to something very small (thumbnail size) and enlarged again, even though the resolution of the image is correct. What can I do?

    Example original image:

    Example resized image:

    Note: I have a WPF application but I use the WinForms function for resizing because it's easier and because I already need a reference to System.Windows.Forms for a tray icon.
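
    A common explanation for this symptom (offered here as a hedged guess, not a confirmed diagnosis) is that GetThumbnailImage may return the image's small embedded EXIF thumbnail scaled up. Drawing onto a new bitmap avoids that path entirely:

        // Sketch: resize with Graphics.DrawImage instead of GetThumbnailImage.
        var resized = new System.Drawing.Bitmap(newWidth, newHeight);
        using (var g = System.Drawing.Graphics.FromImage(resized))
        {
            g.InterpolationMode =
                System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
            g.DrawImage(origImg, 0, 0, newWidth, newHeight); // full-size source
        }
        resized.Save(this.GetBitmapPath(filename), System.Drawing.Imaging.ImageFormat.Bmp);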

  • Unity Configuration and Same Assembly

    - by tyndall
    I'm currently getting an error trying to resolve my IDataAccess class:

        The value of the property 'type' cannot be parsed. The error is: Could not load file or assembly 'TestProject' or one of its dependencies. The system cannot find the file specified. (C:\Source\TestIoC\src\TestIoC\TestProject\bin\Debug\TestProject.vshost.exe.config line 14)

    This is inside a WPF Application project. What is the correct syntax to refer to the assembly you are currently in? Is there a way to do this? I know in a larger solution I would be pulling types from separate assemblies, so this might not be an issue, but what is the right way to do this for a small self-contained test project?

    Note: I'm only interested in doing the XML config at this time, not the C# (in code) config.

    UPDATE: see all comments

    My XML config:

        <configuration>
          <configSections>
            <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
          </configSections>
          <unity>
            <typeAliases>
              <!-- Lifetime manager types -->
              <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="IDataAccess" type="TestProject.IDataAccess, TestProject" />
              <typeAlias alias="DataAccess" type="TestProject.DataAccess, TestProject" />
            </typeAliases>
            <containers>
              <container name="Services">
                <types>
                  <type type="IDataAccess" mapTo="DataAccess" />
                </types>
              </container>
            </containers>
          </unity>
        </configuration>

  • Java Random Slowdowns on Mac OS cont'd

    - by javajustice
    I asked this question a few weeks ago, but I'm still having the problem, and I have some new hints. The original question is here: http://stackoverflow.com/questions/1651887/java-random-slowdowns-on-mac-os

    Basically, I have a Java application that splits a job into independent pieces and runs them in separate threads. The threads have no synchronization or shared memory items. The only resources they do share are data files on the hard disk, with each thread having an open file channel.

    Most of the time it runs very fast, but occasionally it will run very slowly for no apparent reason. If I attach a CPU profiler to it, then it starts running quickly again. If I take a CPU snapshot, it says it's spending most of its time in "self time" in a function that doesn't do anything except check a few (unshared, unsynchronized) booleans. I don't know how this could be accurate because, 1, it makes no sense, and, 2, attaching the profiler seems to knock the threads out of whatever mode they're in and fix the problem. Also, regardless of whether it runs fast or slow, it always finishes and gives the same output, and it never dips in total CPU usage (in this case ~1500%), implying that the threads aren't getting blocked.

    I have tried different garbage collectors, different sizings of the parts of the memory space, writing data output to non-RAID drives, and putting all data output in threads separate from the main worker threads. Does anyone have any idea what kind of problem this could be? Could it be the operating system (OS X 10.6.2)? I have not been able to duplicate it on a Windows machine, but I don't have one with a similar hardware configuration.

  • Enums, Constructor overloads with similar conversions.

    - by David Thornley
    Why does Visual C++ (2008) get confused ('C2666: 2 overloads have similar conversions') when I specify an enum as the second parameter, but not when I use a bool type? Shouldn't type matching already rule out the second constructor, because the first argument is of a basic_string type?

        #include <string>
        using namespace std;

        enum EMyEnum { mbOne, mbTwo };

        class test
        {
        public:
        #if 1 // 0 = COMPILE_OK, 1 = COMPILE_FAIL
            test(basic_string<char> myString, EMyEnum myBool2) { }
            test(bool myBool, bool myBool2) { }
        #else
            test(basic_string<char> myString, bool myBool2) { }
            test(bool myBool, bool myBool2) { }
        #endif
        };

        void testme()
        {
            test("test", mbOne);
        }

    I can work around this by making the parameter a reference, i.e. 'basic_string<char> &myString', but not if it is 'const basic_string<char> &myString'. Calling explicitly via test((basic_string<char>)"test", mbOne); also works. I suspect this has something to do with every expression/type being resolvable to a bool via an inherent '!= 0'. Curious for comments all the same :)

  • ViewModel Views relation/link/synchronization

    - by mehran
    This is my third try at describing the problem:

    - Try 1: Synchronizing view model and view
    - Try 2: WPF ViewModel not active presenter

    Try 3: I have some classes for view models:

        public class Node : INotifyPropertyChanged
        {
            Guid NodeId { get; set; }
            public string Name { get; set; }
        }

        public class Connection : INotifyPropertyChanged
        {
            public Node StartNode { get; set; }
            public Node EndNode { get; set; }
        }

        public class SettingsPackModel
        {
            public List<Node> Nodes { get; private set; }
            public List<Connection> Connections { get; private set; }
        }

    I also have some templates to display these models:

        <DataTemplate DataType="{x:Type vm:Node}">…</DataTemplate>

        <DataTemplate DataType="{x:Type vm:Connection}">
          <my:ConnectionElment StartNodeElment="???" EndNodeElment="???" />
        </DataTemplate>

    But the problem is that the DataTemplate for Connection needs references to two elements of type UIElement. How can I pass these two - how can I fill in the ??? in the expression above?

  • Passing Derived Class Instances as void* to Generic Callbacks in C++

    - by Matthew Iselin
    This is a bit of an involved problem, so I'll do the best I can to explain what's going on. If I miss something, please tell me so I can clarify.

    We have a callback system where on one side a module or application provides a "Service", and clients can perform actions with this Service (a very rudimentary IPC, basically). For future reference, let's say we have some definitions like so:

        typedef int (*callback)(void*); // This is NOT in our code, but makes explaining easier.
        installCallback(string serviceName, callback cb); // Really handled by a proper management system
        sendMessage(string serviceName, void* arg); // arg = value to pass to callback

    This works fine for basic types such as structs or builtins. We have an MI structure a bit like this:

        Device <- Disk <- MyDiskProvider

        class Disk : public virtual Device
        class MyDiskProvider : public Disk

    The provider may be anything from a hardware driver to a bit of glue that handles disk images. The point is that classes inherit Disk.

    We have a "service" which is to be notified of all new Disks in the system, and this is where things unravel:

        void diskHandler(void *p)
        {
            Disk *pDisk = reinterpret_cast<Disk*>(p); // Uh oh!
            // Remainder is not important
        }

        SomeDiskProvider::initialise()
        {
            // Probe hardware, whatever...

            // Tell the disk system we're here!
            sendMessage("disk-handler", reinterpret_cast<void*>(this)); // Uh oh!
        }

    The problem is, SomeDiskProvider inherits Disk, but the callback handler can't receive that type (as the callback function pointer must be generic).

    Could RTTI and templates help here? Any suggestions would be greatly appreciated.
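
    For what it's worth, the usual rule here is that a void* round-trip is only safe between the exact same pointer type, so with multiple/virtual inheritance the pointer should be up-cast to Disk* before it is erased (a sketch of that convention, not the project's actual code):

        // Sender: up-cast to the common base *first*, then erase the type.
        // With MI/virtual bases the Disk subobject may live at a different
        // address than the MyDiskProvider object itself.
        sendMessage("disk-handler", static_cast<void*>(static_cast<Disk*>(this)));

        // Receiver: cast back to exactly the type that was erased.
        void diskHandler(void *p)
        {
            Disk *pDisk = static_cast<Disk*>(p); // matches the sender's convention
            // ...
        }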
