Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • How to copy DispatcherObject (BitmapSource) into different thread?

    - by Tomáš Kafka
    Hi, I am trying to figure out how I can copy a DispatcherObject (in my case a BitmapSource) into another thread.

    Use case: I have a WPF app that needs to show a window in a new thread. (The app is actually an Outlook addin, and we need to do this because Outlook has some hooks in the main UI thread and is stealing certain hotkeys that we need to use - 'lost in translation' in the interop of Outlook, WPF (which we use for the UI), and WinForms (we need to use certain Microsoft-provided WinForms controls).)

    With that, I have my implementation of WPFMessageBox, which is configured by setting some static properties - and one of them is the BitmapSource for the icon. This is used so that at startup I can set WPFMessageBox.Icon once, and from then on, every WPFMessageBox will have the same icon.

    The problem is that the BitmapSource assigned to the icon is a DispatcherObject, and when read, it will throw an InvalidOperationException: "The calling thread cannot access this object because a different thread owns it."

    How can I clone that BitmapSource into the actual thread? It has Clone() and CloneCurrentValue() methods, which don't work (they throw the same exception as well). It also occurred to me to use originalIcon.Dispatcher.Invoke( do the cloning here ) - but the BitmapSource's Dispatcher is null, and still - I'd create a copy on the wrong thread and still couldn't use it on mine. BitmapSource.IsFrozen == true.

    Any idea on how to copy the BitmapSource into a different thread (without completely reconstructing it from an image file in the new thread)?

    EDIT: So, freezing does not help. In the end I have a BitmapFrame (Window.Icon doesn't take any other kind of ImageSource anyway), and when I assign it as the Window.Icon on a different thread, even if frozen, I get an InvalidOperationException: "The calling thread cannot access this object because a different thread owns it." with the following stack trace:

    ```
    WindowsBase.dll!System.Windows.Threading.Dispatcher.VerifyAccess() + 0x4a bytes
    WindowsBase.dll!System.Windows.Threading.DispatcherObject.VerifyAccess() + 0xc bytes
    PresentationCore.dll!System.Windows.Media.Imaging.BitmapDecoder.Frames.get() + 0xe bytes
    PresentationFramework.dll!MS.Internal.AppModel.IconHelper.GetIconHandlesFromBitmapFrame(object callingObj = {WPFControls.WPFMBox.WpfMessageBoxWindow: header}, System.Windows.Media.Imaging.BitmapFrame bf = {System.Windows.Media.Imaging.BitmapFrameDecode}, ref MS.Win32.NativeMethods.IconHandle largeIconHandle = {MS.Win32.NativeMethods.IconHandle}, ref MS.Win32.NativeMethods.IconHandle smallIconHandle = {MS.Win32.NativeMethods.IconHandle}) + 0x3b bytes
    PresentationFramework.dll!System.Windows.Window.UpdateIcon() + 0x118 bytes
    PresentationFramework.dll!System.Windows.Window.SetupInitialState(double requestedTop = NaN, double requestedLeft = NaN, double requestedWidth = 560.0, double requestedHeight = NaN) + 0x8a bytes
    PresentationFramework.dll!System.Windows.Window.CreateSourceWindowImpl() + 0x19b bytes
    PresentationFramework.dll!System.Windows.Window.SafeCreateWindow() + 0x29 bytes
    PresentationFramework.dll!System.Windows.Window.ShowHelper(object booleanBox) + 0x81 bytes
    PresentationFramework.dll!System.Windows.Window.Show() + 0x48 bytes
    PresentationFramework.dll!System.Windows.Window.ShowDialog() + 0x29f bytes
    WPFControls.dll!WPFControls.WPFMBox.WpfMessageBox.ShowDialog(System.Windows.Window owner = {WPFControlsTest.MainWindow}) Line 185 + 0x10 bytes C#
    ```
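
    One workaround sometimes suggested for this kind of situation (my sketch, not something from the post; it assumes a non-palettized pixel format and may not cover the BitmapFrame case described above) is to copy the frozen bitmap's pixels into a plain byte array and rebuild a fresh BitmapSource on whichever thread needs it, so the new object is owned by that thread:

    ```csharp
    using System.Windows.Media.Imaging;

    static BitmapSource CopyForCurrentThread(BitmapSource original)
    {
        // CopyPixels only reads, which is allowed on a frozen source.
        int bytesPerPixel = (original.Format.BitsPerPixel + 7) / 8;
        int stride = original.PixelWidth * bytesPerPixel;
        byte[] pixels = new byte[stride * original.PixelHeight];
        original.CopyPixels(pixels, stride, 0);

        // The rebuilt bitmap belongs to the calling thread's Dispatcher.
        BitmapSource copy = BitmapSource.Create(
            original.PixelWidth, original.PixelHeight,
            original.DpiX, original.DpiY,
            original.Format, null, pixels, stride);
        copy.Freeze(); // frozen objects are safe to share across threads
        return copy;
    }
    ```

    Since Window.Icon wants a BitmapFrame, wrapping the result with BitmapFrame.Create(copy) on the new thread may still be necessary.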


  • Parsing language for both binary and character files

    - by Thorsten S.
    The problem: You have some data, and your program needs specified input - for example, strings which are numbers. You are searching for a way to transform the original data into the format you need. And the problem is: the source can be anything. It can be XML, property lists, or binary which contains the needed data deeply embedded in binary junk. And your output format may vary as well: it can be number strings, floats, doubles...

    You don't want to program. You want routines which give you commands capable of transforming the data into the form you wish. Surely it contains regular expressions, but it is very well designed and it offers capabilities which are sometimes much easier to use and more powerful. Something like a super-grep which you can access (!) as program routines, not only as a tool. It allows:

    - joining/grouping/merging of results
    - inserting/deleting/finding/replacing
    - writing macros which allow you to execute a command chain repeatedly
    - meta-grouping (lists-tables-hypertables)

    Example (no, I am not looking for a solution to this, it is just an example): You want to read XML strings embedded in a binary file with variable-length records. Your tool reads the record length and deletes the junk surrounding your text. Now it splits open the XML and extracts the strings. The strings use Indian number glyphs and contain decimal commas instead of decimal points, so your tool transforms them into ASCII and replaces the commas with points. Now the results must be stored into matrices of variable length... etc. etc.

    I am searching for a good language / language design and, if possible, an implementation. Which design do you like - or even, if it does not fulfill the conditions, which one wouldn't you want to miss?

    EDIT: The question is whether a solution for the problem exists and, if yes, which implementations are available. You DO NOT implement your own sorting algorithm if Quicksort, Mergesort and Heapsort are available. You DO NOT invent your own text parsing method if you have regular expressions. You DO NOT invent your own 3D language for graphics if OpenGL/Direct3D is available. There are existing solutions, or at least papers describing the problem and giving suggestions. And there are people who may have worked on and experienced such problems and who can give ideas and suggestions. The idea that this problem is totally new and I should work it out and implement it myself without background knowledge seems to me, I must admit, totally off the mark.


  • WTF is wtf? (in WebKit code base)

    - by Motti
    I downloaded Chromium's code base and ran across the WTF namespace:

    ```cpp
    namespace WTF {

        /*
         * C++'s idea of a reinterpret_cast lacks sufficient cojones.
         */
        template<typename TO, typename FROM>
        TO bitwise_cast(FROM in)
        {
            COMPILE_ASSERT(sizeof(TO) == sizeof(FROM), WTF_wtf_reinterpret_cast_sizeof_types_is_equal);
            union {
                FROM from;
                TO to;
            } u;
            u.from = in;
            return u.to;
        }

    } // namespace WTF
    ```

    Does this mean what I think it means? Could be so: the bitwise_cast implementation specified here will not compile if either TO or FROM is not a POD, and it is not (AFAIK) more powerful than C++'s built-in reinterpret_cast. The only point of light I see here is that nobody seems to be using bitwise_cast in the Chromium project.

    I see there's some legalese, so I'll put in the little letters to keep out of trouble:

    ```cpp
    /*
     * Copyright (C) 2008 Apple Inc. All Rights Reserved.
     *
     * Redistribution and use in source and binary forms, with or without
     * modification, are permitted provided that the following conditions
     * are met:
     * 1. Redistributions of source code must retain the above copyright
     *    notice, this list of conditions and the following disclaimer.
     * 2. Redistributions in binary form must reproduce the above copyright
     *    notice, this list of conditions and the following disclaimer in the
     *    documentation and/or other materials provided with the distribution.
     *
     * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
     * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
     * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
     * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
     * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
     * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
     * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
     * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
     * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
     * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     */
    ```


  • CSS3's border-radius property and border-collapse:collapse don't mix. How can I use border-radius to

    - by vamin
    Edit - Original Title: Is there an alternative way to achieve border-collapse:collapse in CSS (in order to have a collapsed, rounded corner table)?

    Since it turns out that simply getting the table's borders to collapse does not solve the root problem, I have updated the title to better reflect the discussion.

    I am trying to make a table with rounded corners using the CSS3 border-radius property. The table styles I'm using look something like this:

    ```css
    table {
      -moz-border-radius: 10px;
      -webkit-border-radius: 10px;
      border-radius: 10px;
    }
    ```

    Here's the problem. I also want to set the border-collapse:collapse property, and when that is set, border-radius no longer works (at least in Firefox). (Edit - I thought this might just be a difference in Mozilla's implementation, but it turns out this is the way it's supposed to work according to the W3C.) Is there a CSS-based way I can get the same effect as border-collapse:collapse without actually using it?

    Edits: I've made a simple page to demonstrate the problem here (Firefox/Safari only). It seems that a large part of the problem is that setting the table to have rounded corners does not affect the corners of the corner td elements. If the table were all one color, this wouldn't be a problem, since I could just make the top and bottom td corners rounded for the first and last row respectively. However, I am using different background colors for the table to differentiate the headings and for striping, so the inner td elements would show their rounded corners as well.

    Summary of proposed solutions:

    - Surrounding the table with another element with round corners doesn't work because the table's square corners "bleed through."
    - Specifying a border width of 0 doesn't collapse the table.
    - The bottom td corners are still square after setting cellspacing to zero.
    - Using JavaScript instead works - by avoiding the problem.

    Possible solutions: The tables are generated in PHP, so I could just apply a different class to each of the outer th/td elements and style each corner separately. I'd rather not do this, since it's not very elegant and a bit of a pain to apply to multiple tables, so please keep suggestions coming.

    Possible solution 2 is to use JavaScript (jQuery, specifically) to style the corners. This solution also works, but is still not quite what I'm looking for (I know I'm picky). I have two reservations: 1) this is a very lightweight site, and I'd like to keep JavaScript to the barest minimum; 2) part of the appeal that border-radius has for me is graceful degradation and progressive enhancement. By using border-radius for all rounded corners, I hope to have a consistently rounded site in CSS3-capable browsers and a consistently square site in others (I'm looking at you, IE).

    I know that trying to do this with CSS3 today may seem needless, but I have my reasons. I would also like to point out that this problem is a result of the W3C specification, not poor CSS3 support, so any solution will still be relevant and useful when CSS3 has more widespread support.


  • Getting RSSIValue from IOBluetoothHostController

    - by Tanner Ezell
    I'm trying to write a simple application that gathers the RSSIValue and displays it via NSLog. My code is as follows:

    ```objc
    #import <Foundation/Foundation.h>
    #import <Cocoa/Cocoa.h>
    #import <IOBluetooth/objc/IOBluetoothDeviceInquiry.h>
    #import <IOBluetooth/objc/IOBluetoothDevice.h>
    #import <IOBluetooth/objc/IOBluetoothHostController.h>
    #import <IOBluetooth/IOBluetoothUtilities.h>

    @interface getRSSI: NSObject {}
    - (void) readRSSIForDeviceComplete:(id)controller
                                device:(IOBluetoothDevice*)device
                                  info:(BluetoothHCIRSSIInfo*)info
                                 error:(IOReturn)error;
    @end

    @implementation getRSSI
    - (void) readRSSIForDeviceComplete:(id)controller
                                device:(IOBluetoothDevice*)device
                                  info:(BluetoothHCIRSSIInfo*)info
                                 error:(IOReturn)error
    {
        if (error != kIOReturnSuccess) {
            NSLog(@"readRSSIForDeviceComplete return error");
            CFRunLoopStop(CFRunLoopGetCurrent());
        }
        if (info->handle == kBluetoothConnectionHandleNone) {
            NSLog(@"readRSSIForDeviceComplete no handle");
            CFRunLoopStop(CFRunLoopGetCurrent());
        }

        NSLog(@"RSSI = %i dBm ", info->RSSIValue);

        [NSThread sleepUntilDate: [NSDate dateWithTimeIntervalSinceNow: 5]];
        [device closeConnection];
        [device openConnection];
        [controller readRSSIForDevice:device];
    }
    @end

    int main (int argc, const char * argv[]) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSLog(@"start");

        IOBluetoothHostController *hci = [IOBluetoothHostController defaultController];
        NSString *addrStr = @"xx:xx:xx:xx:xx:xx";
        BluetoothDeviceAddress addr;
        IOBluetoothNSStringToDeviceAddress(addrStr, &addr);

        IOBluetoothDevice *device = [[IOBluetoothDevice alloc] init];
        device = [IOBluetoothDevice withAddress:&addr];
        [device retain];
        [device openConnection];

        getRSSI *rssi = [[getRSSI alloc] init];
        [hci setDelegate:rssi];
        [hci readRSSIForDevice:device];

        CFRunLoopRun();

        [hci release];
        [rssi release];
        [pool release];
        return 0;
    }
    ```

    The problem I am facing is that readRSSIForDeviceComplete seems to work just fine; info passes along a value. The problem is that the RSSI value is drastically different from the one I can view in OS X by option-clicking the Bluetooth icon at the top. It is typical for my application to print 1, 2, -1, -8, etc., while the menu displays -64 dBm, -66, -70, -42, etc. I would really appreciate some guidance.


  • Problem with Landscape and Portrait view in a TabBar Application

    - by JoshD
    I have a TabBar app that I would like to work in both landscape and portrait. The issue: when I go to tab 2 and make a selection from my table, then select tab 1 and rotate the device, then select tab 2 again, the content does not know that the device rotated and will not display my custom oriented content correctly. I am trying to write a private method that tells the view what orientation it is currently in. In viewDidLoad I assume it is in portrait, but in shouldAutorotate I have it look in the private method for the correct alignment of the content. Please help!! Here is my code:

    ```objc
    #import "DetailViewController.h"
    #import "ScheduleTableViewController.h"
    #import "BrightcoveDemoAppDelegate.h"
    #import "Constants.h"

    @implementation DetailViewController

    @synthesize CurrentLevel, CurrentTitle, tableDataSource, logoName, showDescription,
                showDescriptionInfo, showTime, showTimeInfo, tableBG;

    - (void)layoutSubviews {
        showLogo.frame = CGRectMake(40, 20, 187, 101);
        showDescription.frame = CGRectMake(85, 140, 330, 65);
        showTime.frame = CGRectMake(130, 10, 149, 119);
        tableBG.frame = CGRectMake(0, 0, 480, 320);
    }

    /*
    // The designated initializer. Override if you create the controller programmatically
    // and want to perform customization that is not appropriate for viewDidLoad.
    - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
        if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
            // Custom initialization
        }
        return self;
    }
    */

    /*
    // Implement loadView to create a view hierarchy programmatically, without using a nib.
    - (void)loadView {
    }
    */

    // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
    - (void)viewDidLoad {
        [super viewDidLoad];
        self.navigationItem.title = CurrentTitle;
        [showDescription setEditable:NO];

        // show the description
        showDescription.text = showDescriptionInfo;
        showTime.text = showTimeInfo;

        NSString *Path = [[NSBundle mainBundle] bundlePath];
        NSString *ImagePath = [Path stringByAppendingPathComponent:logoName];
        UIImage *tempImg = [[UIImage alloc] initWithContentsOfFile:ImagePath];
        [showLogo setImage:tempImg];
        [tempImg release];

        [self masterView];
    }

    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        return YES;
    }

    - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                             duration:(NSTimeInterval)duration {
        isLandscape = UIInterfaceOrientationIsLandscape(toInterfaceOrientation);
        if (isLandscape = YES) {
            [self layoutSubviews];
        }
    }

    - (void)didReceiveMemoryWarning {
        // Releases the view if it doesn't have a superview.
        [super didReceiveMemoryWarning];
        // Release any cached data, images, etc that aren't in use.
    }

    - (void)viewDidUnload {
        // Release any retained subviews of the main view.
        // e.g. self.myOutlet = nil;
    }

    - (void)dealloc {
        [logoName release];
        [showLogo release];
        [showDescription release];
        [showDescriptionInfo release];
        [super dealloc];
    }

    @end
    ```


  • Fluent NHibernate not working outside of NUnit test fixtures

    - by thorkia
    Okay, here is my problem... I created a data layer using the RTM Fluent NHibernate. My create-session code looks like this:

    ```csharp
    _session = Fluently.Configure()
        .Database(SQLiteConfiguration.Standard.UsingFile("Data.s3db"))
        .Mappings(m =>
        {
            m.FluentMappings.AddFromAssemblyOf<ProductMap>();
            m.FluentMappings.AddFromAssemblyOf<ProductLogMap>();
        })
        .ExposeConfiguration(BuildSchema)
        .BuildSessionFactory();
    ```

    When I reference the module in a test project and create a test fixture that looks something like this:

    ```csharp
    [Test]
    public void CanAddProduct()
    {
        var product = new Product { Code = "9", Name = "Test 9" };

        IProductRepository repository = new ProductRepository();
        repository.AddProduct(product);

        using (ISession session = OrmHelper.OpenSession())
        {
            var fromDb = session.Get<Product>(product.Id);

            Assert.IsNotNull(fromDb);
            Assert.AreNotSame(fromDb, product);
            Assert.AreEqual(fromDb.Id, product.Id);
        }
    }
    ```

    my tests pass. When I open up the created SQLite DB, the new Product with Code 9 is in it, and the tables for Product and ProductLog are there.

    Now, when I create a new console application, reference the same library, and do something like this:

    ```csharp
    Product product = new Product() { Code = "10", Name = "Hello" };

    IProductRepository repository = new ProductRepository();
    repository.AddProduct(product);

    Console.WriteLine(product.Id);
    Console.ReadLine();
    ```

    it doesn't work. I actually get a pretty nasty exception chain. To save you lots of headaches, here is the summary.

    Top-level exception: "An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail." (The PotentialReasons collection is empty.)

    The inner exception: "The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly> element in the application configuration file to specify the full name of the assembly."

    Both the unit test library and the console application reference the exact same version of System.Data.SQLite. Both projects have the exact same DLLs in the debug folder. I even tried copying the SQLite DB the unit test library created into the debug directory of the console app, and removed the BuildSchema lines, and it still fails.

    If anyone can help me figure out why this won't work outside of my unit tests, it would be greatly appreciated. This crazy bug has me at a standstill.


  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation:

    "To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer."

    This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering:

    Service Locator: Don't use dependency injection; instead, use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context?

    Dependency injection via properties: Inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected. (A sketch of this option follows below.)

    Inject dependencies with an Initialize method: This is much like injection via properties, but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assist you with errors when the list changes.

    Any idea what the best practice is here? How do you do it?

    edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification), so I don't think that "you shouldn't inject any services" is a good answer.

    edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
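
    As a hedged illustration of the property-injection option (my sketch, not from the question; the IClock/SystemClock names are invented for the example), a component can expose its dependency as a settable property that lazily falls back to a designer-friendly default:

    ```csharp
    using System;
    using System.ComponentModel;

    public interface IClock { DateTime Now { get; } }   // hypothetical dependency

    public class SystemClock : IClock
    {
        public DateTime Now { get { return DateTime.Now; } }
    }

    public class ClockComponent : Component
    {
        private IClock clock;

        public ClockComponent() { }                  // parameterless ctor required by the designer

        public ClockComponent(IContainer container)  // IContainer overload per the IComponent docs
        {
            container.Add(this);
        }

        [Browsable(false)]
        [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
        public IClock Clock
        {
            get { return clock ?? (clock = new SystemClock()); } // default instance at design time
            set { clock = value; }                               // injection point for runtime/tests
        }
    }
    ```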


  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 × 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail.

    I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 primary/foreign-key referring tables. I am not using joins, as they are not scalable here. So for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into each of the tables and their corresponding arrays.

    The queries are really simple:

    ```sql
    select * from x.train where tr_id=1   -- primary key & indexed
    select q_t from x.qt where q_id=2     -- non-primary key but indexed
    ```

    (and similarly, five queries)

    Each computer writes two HDF5 files, and hence the total count comes to around 20 files.

    Some calculations and statistics:

    - Total number of records: 14,37,00,000
    - Total number of records per file: 143700000 / 20 = 71,85,000
    - The total number of records in each file: 71,85,000 × 5 = 3,59,25,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made changes to the following in the PostgreSQL configuration file: shared_buffers: 2 GB; effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the performance is as follows: the total number of records written for each file is about 6,21,000 × 5 = 31,05,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and if it processes at this speed it will take about 11 days, which is too long for my experiments.

    Please suggest how I can improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks
    Sree aurovindh V


  • Executing a .NET Managed Assembly from SQL Server 2008 - Pro's, Con's & Recommendations

    - by RPM1984
    Hi guys, looking for opinions/recommendations/links for the following scenario I'm currently facing.

    The platform: .NET 4.0 web application; SQL Server 2008.

    The task: Overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" decides to post a forum topic. This results in (the end solution) needing to look at various factors (about the post he made), then, after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of these operations.

    The options: OK - so here's what I'm thinking. Although it would be much easier to do this in C# (LINQ to SQL), it doesn't make much sense, as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (a C# class library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically, I'd expose a few methods that can be called by the T-SQL stored procedures. This to me has the following advantages:

    - Simplicity of math. Do complex math in .NET vs. complex math in T-SQL. No-brainer. =)
    - Abstraction of computations, configurable "lookup" values, and business logic from raw T-SQL.
    - T-SQL only needs to care about the data, simplifying the stored procedures and making them easier to maintain. When it needs to do math, it delegates off to the managed assembly.

    So, having said that - I've never done this before (call a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated.

    So - what am I asking? Well, firstly - I need some better references on how to actually do this. "This" being how to call a C# .NET 4 assembly from within T-SQL stored procedures in SQL Server 2008. (See the sketch below for the rough shape of such an assembly.) Secondly, who out there has done this, and what problems (if any) did you face? I realize this may be difficult to provide a "correct answer" for, so I'll award it to whoever gives me the answer with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
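
    For reference, a minimal sketch of what a SQL CLR scalar function can look like (my illustration, not from the post; the class name and formula are invented placeholders). The compiled assembly is registered on the SQL Server side with CREATE ASSEMBLY and surfaced with CREATE FUNCTION ... EXTERNAL NAME, after which stored procedures can call it like any other function:

    ```csharp
    using System;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public static class ForumMath
    {
        // Exposed to T-SQL as a scalar function; SqlDouble/SqlInt32 map to SQL's float/int.
        [SqlFunction(IsDeterministic = true, IsPrecise = false)]
        public static SqlDouble PostScore(SqlDouble ratio, SqlInt32 postCount)
        {
            if (ratio.IsNull || postCount.IsNull)
                return SqlDouble.Null;

            // Placeholder for the "complex math" the question describes.
            return Math.Log(postCount.Value + 1) * ratio.Value;
        }
    }
    ```

    One detail worth verifying up front: SQL Server 2008 hosts the 2.0 CLR runtime, so even though the web application is on .NET 4, the assembly loaded into SQL Server may need to target .NET 3.5 or earlier.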


  • XPointers in SVG

    - by Nycto
    I've been trying to get XPointer URIs working in an SVG file, but haven't had any luck so far. After trying something more complicated and failing, I simplified it down to just referencing an ID. However, this still fails. The spec seems pretty clear about this implementation: http://www.w3.org/TR/SVG/struct.html#URIReference

    I found an example online of what should be a working XPointer reference within an SVG document. Here is the version I copied out:

    ```xml
    <?xml version="1.0" standalone="no"?>
    <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
        "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
    <svg width="500" height="200" version="1.1"
         xmlns="http://www.w3.org/2000/svg"
         xmlns:xlink="http://www.w3.org/1999/xlink">
        <defs>
            <rect id="simpleRect" width="100px" height="75px"/>
        </defs>
        <use xlink:href="#simpleRect" x="50" y="50" style="fill:red"/>
        <use xlink:href="#xpointer(id('simpleRect'))" x="250" y="50" style="fill:yellow"/>
    </svg>
    ```

    This should display two rectangles - one red and one yellow. I tried rendering with Firefox 3.6 and Inkscape 0.47. No success: only the red rectangle shows. What am I missing? Thanks for any help you can offer.


  • Why does Git.pm on cygwin complain about 'Out of memory during "large" request?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in Cygwin:

    ```
    Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes
        at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3.
    ```

    268439552 is 256 MB. Cygwin's maximum memory size is set to 1024 MB, so I'm guessing that it has a different maximum memory size for Perl? How can I increase the maximum memory size that Perl programs can use?

    Update: This is where the error occurs (in Git.pm):

    ```perl
    while (1) {
        my $bytesLeft = $size - $bytesRead;
        last unless $bytesLeft;

        my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024;
        my $read = read($in, $blob, $bytesToRead, $bytesRead); # line 898
        unless (defined($read)) {
            $self->_close_cat_blob();
            throw Error::Simple("in pipe went bad");
        }

        $bytesRead += $read;
    }
    ```

    I've added a print before line 898 to print out $bytesToRead and $bytesRead, and the result was 1024 for $bytesToRead and 134220800 for $bytesRead - so it's reading 1024 bytes at a time and has already read 128 MB. Perl's read function must be out of memory and trying to request double its memory size... is there a way to specify how much memory to request? Or is that implementation-dependent?

    UPDATE 2: While testing memory allocation in Cygwin, this C program's output was 1536 MB:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main()
    {
        unsigned int bit = 0x40000000, sum = 0;
        char *x;
        while (bit > 4096) {
            x = malloc(bit);
            if (x) sum += bit;
            bit >>= 1;
        }
        printf("%08x bytes (%.1fMb)\n", sum, sum / 1024.0 / 1024.0);
        return 0;
    }
    ```

    while this Perl program crashed if the file size was greater than 384 MB (but succeeded if the file size was less):

    ```perl
    open(F, "<400") or die("can't read\n");
    $size = -s "400";
    $read = read(F, $s, $size);
    ```

    The error is similar:

    ```
    Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.
    ```


  • Adding NavigationControl to a TabBar Application containing UITableViews

    - by kungfuslippers
    Hi, I'm new to iPhone dev and wanted to get advice on the general design pattern / guide for putting a certain kind of app together. I'm trying to build a TabBar-type application. One of the tabs needs to display a table view, and selecting a cell from within the table view will do something else - maybe show another table view or a web page. I need a navigation bar to be able to take me back from the table view/web page.

    The approach I've taken so far is to:

    1. Create an app based around UITabBarController as the root controller, i.e.:

    ```objc
    @interface MyAppDelegate : NSObject <UIApplicationDelegate> {
        IBOutlet UIWindow *window;
        IBOutlet UITabBarController *rootController;
    }
    ```

    2. Create a load of UIViewController-derived classes and associated NIBs and wire everything up in IB, so when I run the app I get the basic tabs working.

    3. Take the UIViewController-derived class and modify it to the following:

    ```objc
    @interface MyViewController : UIViewController <UITableViewDataSource, UITableViewDelegate> {
    }
    ```

    4. Add the delegate methods to the implementation of MyViewController:

    ```objc
    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        return 2;
    }

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        static NSString *CellIdentifier = @"Cell";

        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                           reuseIdentifier:CellIdentifier] autorelease];
        }

        if (indexPath.row == 0) {
            cell.textLabel.text = @"Mummy";
        } else {
            cell.textLabel.text = @"Daddy";
        }
        return cell;
    }
    ```

    5. Go back to IB, open MyViewController.xib and drop a UITableView onto it. Set the File's Owner to be MyViewController, then set the delegate and datasource of the UITableView to be MyViewController.

    If I run the app now, I get the table view appearing with Mummy and Daddy working nicely. So far so good.

    The question is how I go about incorporating a navigation bar into my current code for when I implement:

    ```objc
    - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
        // get row selected
        NSUInteger row = [indexPath row];
        if (row == 0) {
            // Show another table
        } else if (row == 1) {
            // Show a web view
        }
    }
    ```

    Do I drop a NavigationBar UI control onto MyControllerView.xib? Should I create it programmatically? Should I be using a UINavigationController somewhere? I've tried dropping a NavigationBar onto my MyControllerView.xib in IB, but it's not shown when I run the app - only the TableView is displayed.


  • Escaping Code for Different Shells

    - by Jon Purdy
    Question: What characters do I need to escape in a user-entered string to securely pass it into shells on Windows and Unix? What shell differences and version differences should be taken into account? Can I use printf "%q" somehow, and is that reliable across shells?

    Backstory (a.k.a. shameless self-promotion): I made a little DSL, the Vision Web Template Language, which allows the user to create templates for X(HT)ML documents and fragments, then automatically fill them in with content. It's designed to separate template logic from dynamic content generation, in the same way that CSS is used to separate markup from presentation. In order to generate dynamic content, a Vision script must defer to a program written in a language that can handle the generation logic, such as Perl or Python. (Aside: using PHP is also possible, but Vision is intended to solve some of the very problems that PHP perpetuates.) To do this, the script makes use of the @system directive, which executes a shell command and expands to its output. (Platform-specific generation can be handled using @unix or @windows, which only expand on the proper platform.)

    The problem is obvious, I should think:

    test.htm:

    ```
    <!-- ... -->
    <form action="login.vis" method="POST">
        <input type="text" name="USERNAME"/>
        <input type="password" name="PASSWORD"/>
    </form>
    <!-- ... -->
    ```

    login.vis:

    ```
    #!/usr/bin/vision
    # Think USERNAME = ";rm -f;"
    @system './login.pl' { USERNAME; PASSWORD }
    ```

    One way to safeguard against this kind of attack is to set proper permissions on scripts and directories, but web developers may not always set things up correctly, and the naive developer should get just as much security as the experienced one. The solution, logically, is to include a @quote directive that produces a properly escaped string for the current platform:

    ```
    @system './login.pl' { @quote : USERNAME; @quote : PASSWORD }
    ```

    But what should @quote actually do? It needs to be both cross-platform and secure, and I don't want to create terrible problems with a naive implementation. Any thoughts? (A sketch of one common quoting strategy follows below.)
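
    For the POSIX side, one widely used quoting strategy (my sketch, not part of the question) is to wrap the whole value in single quotes and replace each embedded single quote with the sequence '\'' - inside single quotes the shell interprets nothing else, so only the quote character itself needs handling. cmd.exe on Windows follows entirely different rules, which is exactly why a @quote directive would have to branch per platform:

    ```csharp
    // Quote an arbitrary value for Bourne-family shells (sh, bash, dash, ...).
    // ";rm -f;"  becomes  "';rm -f;'", which the shell treats as one literal argument.
    static string QuoteForPosixShell(string value)
    {
        // Close the quote, emit an escaped quote, reopen: abc'def -> 'abc'\''def'
        return "'" + value.Replace("'", "'\\''") + "'";
    }
    ```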


  • What's a clean way to break up a DataTable into chunks of a fixed size with Linq?

    - by Michael Haren
    Update: Here's a similar question.

    Suppose I have a DataTable with a few thousand DataRows in it. I'd like to break up the table into chunks of smaller rows for processing. I thought C# 3's improved ability to work with data might help. This is the skeleton I have so far:

    ```csharp
    DataTable Table = GetTonsOfData();

    // Chunks should be any IEnumerable<Chunk> type
    var Chunks = ChunkifyTableIntoSmallerChunksSomehow; // ** help here! **

    foreach (var Chunk in Chunks)
    {
        // Chunk should be any IEnumerable<DataRow> type
        ProcessChunk(Chunk);
    }
    ```

    Any suggestions on what should replace ChunkifyTableIntoSmallerChunksSomehow? I'm really interested in how someone would do this with access to C# 3 tools. If attempting to apply these tools is inappropriate, please explain!

    Update 3 (revised chunking, as I really want tables, not IEnumerables; going with an extension method - thanks Jacob): Final implementation.

    Extension method to handle the chunking:

    ```csharp
    public static class HarenExtensions
    {
        public static IEnumerable<DataTable> Chunkify(this DataTable table, int chunkSize)
        {
            for (int i = 0; i < table.Rows.Count; i += chunkSize)
            {
                DataTable Chunk = table.Clone();

                foreach (DataRow Row in table.Select().Skip(i).Take(chunkSize))
                {
                    Chunk.ImportRow(Row);
                }

                yield return Chunk;
            }
        }
    }
    ```

    Example consumer of that extension method, with sample output from an ad hoc test:

    ```csharp
    class Program
    {
        static void Main(string[] args)
        {
            DataTable Table = GetTonsOfData();

            foreach (DataTable Chunk in Table.Chunkify(100))
            {
                Console.WriteLine("{0} - {1}", Chunk.Rows[0][0], Chunk.Rows[Chunk.Rows.Count - 1][0]);
            }

            Console.ReadLine();
        }

        static DataTable GetTonsOfData()
        {
            DataTable Table = new DataTable();
            Table.Columns.Add(new DataColumn());

            for (int i = 0; i < 1000; i++)
            {
                DataRow Row = Table.NewRow();
                Row[0] = i;
                Table.Rows.Add(Row);
            }

            return Table;
        }
    }
    ```


  • Unique_ptr compiler errors

    - by Godric Seer
    I am designing an entity-component system for a project, and C++ memory management is giving me a few issues. I just want to make sure my design is legitimate. To start, I have an Entity class which stores a vector of Components:

    ```cpp
    class Entity {
    private:
        std::vector<std::unique_ptr<Component> > components;

    public:
        Entity() { }

        void AddComponent(Component* component)
        {
            this->components.push_back(std::unique_ptr<Component>(component));
        }

        ~Entity();
    };
    ```

    If I am not mistaken, this means that when the destructor is called (even the default, compiler-created one), the destructor for the Entity will call ~components, which will call ~std::unique_ptr for each element in the vector, leading to the destruction of each Component - which is what I want.

    The Component class has virtual methods, but the important part is its constructor:

    ```cpp
    Component::Component(Entity parent)
    {
        parent.addComponent(this); // I am not sure if this would work like I expect
        // Other things here
    }
    ```

    As long as passing this to the method works, this also does what I want. My confusion is in the factory. What I want to do is something along the lines of:

    ```cpp
    std::shared_ptr<Entity> createEntity()
    {
        std::shared_ptr<Entity> entityPtr(new Entity());
        new Component(*parent); // Initialize more, and other types of Components
        return entityPtr;
    }
    ```

    Now, I believe that this setup will leave the ownership of the Component in the hands of its parent Entity, which is what I want.

    First, a small question: do I need to pass the Entity into the Component constructor by reference or pointer or something? If I understand C++, it would pass by value, which means it gets copied, and the copied Entity would die at the end of the constructor.

    The second, and main, question is that code based on this sample will not compile. The complete error is too large to print here; however, I think I know somewhat of what is going on. The compiler's error says I can't delete an incomplete type. My Component class has a purely virtual destructor with an implementation:

    ```cpp
    inline Component::~Component() { }
    ```

    at the end of the header. However, the whole point is that Component is actually an interface. I know from here that a complete type is required for unique_ptr destruction. The question is, how do I work around this? For reference, I am using gcc 4.4.6.


  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails. I'm trying to figure out how this is resolved in an actual implementation.

    Consider this example, where A is the receiver and B is the sender:

    ```
    A = abcde1234512345fghij
    B = abcde12345fghij
    ```

    As you can see, the only change is that 12345 has been removed. Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list:

    ```
    abcde|12345|fghij

    abcde -> 495
    12345 -> 255
    fghij -> 520

    values = [495, 255, 520]
    ```

    Next we check to see if any hash values differ in A. If there's a matching block, we can skip to the end of that block for the next check. If there's a non-matching block, then we've found a difference. I'll step through this process:

    1. Hash the first block. Does this hash exist in the values list? abcde -> 495 (yes, so skip)
    2. Hash the second block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
    3. Hash the third block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
    4. Hash the fourth block. Does this hash exist in the values list? fghij -> 520 (yes, so skip)
    5. No more data, we're done.

    Since every hash was found in the values list, we conclude that A and B are the same. Which, in my humble opinion, isn't true. It seems to me this will happen whenever there is more than one block that shares the same hash. What am I missing? (A sketch of the rolling checksum involved follows below.)
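
    For context, here is a hedged sketch (mine, not the poster's) of the rsync-style weak rolling checksum. Two details of the real algorithm bear on the example above: the scanning side computes this checksum at every byte offset, not only on block boundaries, and the output of the comparison is an ordered stream of "matched block index" and "literal bytes" tokens from which the other side reconstructs the file exactly - so a block that occurs twice simply yields two references to the same source block, rather than a conclusion that the files are equal.

    ```csharp
    // Weak rsync-style checksum (Adler-32 flavoured) over a window of n bytes.
    static uint WeakChecksum(byte[] data, int offset, int n)
    {
        uint a = 0, b = 0;
        for (int i = 0; i < n; i++)
        {
            a += data[offset + i];
            b += (uint)(n - i) * data[offset + i]; // earlier bytes get larger weights
        }
        return (a & 0xffff) | (b << 16);
    }

    // Slide the window one byte to the right in O(1): drop outByte, admit inByte.
    static uint Roll(uint checksum, byte outByte, byte inByte, int n)
    {
        uint a = checksum & 0xffff;
        uint b = checksum >> 16;
        a = (a - outByte + inByte) & 0xffff;
        b = (b - (uint)n * outByte + a) & 0xffff;
        return a | (b << 16);
    }
    ```

    The cheap O(1) rolling update is what makes the byte-at-a-time scan affordable; candidate matches found this way are then confirmed with a strong (MD4/MD5-class) hash before a block reference is emitted.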


  • Performance of SHA-1 Checksum from Android 2.2 to 2.3 and Higher

    - by sbrichards
    In testing the performance of the following:

    ```java
    package com.srichards.sha;

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    import java.io.IOException;
    import java.io.InputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    import com.srichards.sha.R;

    public class SHAHashActivity extends Activity {

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            TextView tv = new TextView(this);
            String shaVal = this.getString(R.string.sha);

            long systimeBefore = System.currentTimeMillis();
            String result = shaCheck(shaVal);
            long systimeResult = System.currentTimeMillis() - systimeBefore;

            tv.setText("\nRunTime: " + systimeResult + "\nHas been modified? | Hash Value: " + result);
            setContentView(tv);
        }

        public String shaCheck(String shaVal) {
            try {
                String resultant = "null";
                MessageDigest digest = MessageDigest.getInstance("SHA1");

                ZipFile zf = null;
                try {
                    zf = new ZipFile("/data/app/com.blah.android-1.apk"); // /data/app/com.blah.android-2.apk
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }

                ZipEntry ze = zf.getEntry("classes.dex");
                InputStream file = zf.getInputStream(ze);

                byte[] dataBytes = new byte[32768]; // 65536 32768
                int nread = 0;
                while ((nread = file.read(dataBytes)) != -1) {
                    digest.update(dataBytes, 0, nread);
                }

                byte[] rbytes = digest.digest();
                StringBuffer sb = new StringBuffer("");
                for (int i = 0; i < rbytes.length; i++) {
                    sb.append(Integer.toString((rbytes[i] & 0xff) + 0x100, 16).substring(1));
                }

                if (shaVal.equals(sb.toString())) {
                    resultant = ("\nFalse : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal);
                } else {
                    resultant = ("\nTrue : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal);
                }
                return resultant;
            } catch (IOException e) {
                e.printStackTrace();
            } catch (NoSuchAlgorithmException e) {
                e.printStackTrace();
            }
            return null;
        }
    }
    ```

    On a 2.2 device I get an average runtime of ~350 ms, while on newer devices I get runtimes of 26-50 ms, which is substantially lower. I'm keeping in mind that these devices are newer and have better hardware, but I am also wondering if the platform and the implementation affect performance much, and if there is anything that could reduce runtimes on 2.2 devices. Note: the classes.dex of the .apk being accessed is roughly 4 MB. Thanks!


  • EXC_BAD_ACCESS when I change moviePlayer contentURL

    - by Bruno
    Hello. In a few words, my application does this:

    1) My main view (MovieListController) has some video thumbnails, and when I tap on one, it displays the movie player (MoviePlayerViewController).

    MoviePlayerViewController.h:

    ```objc
    @interface MoviePlayerViewController : UIViewController <UITableViewDelegate> {
        UIView *viewForMovie;
        MPMoviePlayerController *player;
    }

    @property (nonatomic, retain) IBOutlet UIView *viewForMovie;
    @property (nonatomic, retain) MPMoviePlayerController *player;

    - (NSURL *)movieURL;
    @end
    ```

    MovieListController.m:

    ```objc
    MoviePlayerViewController *controllerTV = [[MoviePlayerViewController alloc] initWithNibName:@"MoviePlayerViewController" bundle:nil];
    controllerTV.delegate = self;
    controllerTV.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
    [self presentModalViewController:controllerTV animated:YES];
    [controllerTV release];
    ```

    2) In my movie player, I initialize the video I want to play.

    MoviePlayerViewController.m:

    ```objc
    @implementation MoviePlayerViewController

    @synthesize player;
    @synthesize viewForMovie;

    - (void)viewDidLoad {
        NSLog(@"start");
        [super viewDidLoad];
        self.player = [[MPMoviePlayerController alloc] init];
        self.player.view.frame = self.viewForMovie.bounds;
        self.player.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
        [self.viewForMovie addSubview:player.view];
        self.player.contentURL = [self movieURL];
    }

    - (void)dealloc {
        NSLog(@"dealloc TV");
        [player release];
        [viewForMovie release];
        [super dealloc];
    }

    - (NSURL *)movieURL {
        NSBundle *bundle = [NSBundle mainBundle];
        NSString *moviePath = [bundle pathForResource:@"FR_Tribord_Surf camp_100204" ofType:@"mp4"];
        if (moviePath) {
            return [NSURL fileURLWithPath:moviePath];
        } else {
            return nil;
        }
    }
    ```

    It works - my movie is displayed.

    My problem: when I go back to my main view:

    ```objc
    - (void)returnToMap:(MoviePlayerViewController *)controller {
        [self dismissModalViewControllerAnimated:YES];
    }
    ```

    and I tap a thumbnail to display the movie player (MoviePlayerViewController) again, I get "Program received signal: EXC_BAD_ACCESS". In my debugger I saw that it stops in the "main" thread:

    ```objc
    //
    //  main.m
    //  MoviePlayer
    //
    //  Created by Eric Freeman on 3/27/10.
    //  Copyright Apple Inc 2010. All rights reserved.
    //

    #import <UIKit/UIKit.h>

    int main(int argc, char *argv[]) {
        NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
        int retVal = UIApplicationMain(argc, argv, nil, nil); // EXC_BAD_ACCESS
        [pool release];
        return retVal;
    }
    ```

    If I comment out self.player.contentURL = [self movieURL]; it works, but when I leave it in, I have this problem. I read that it's due to a null pointer or a memory problem, but I don't understand why it works the first time and not the second time. I release my objects in the dealloc method. Thanks for your help! Bruno.


  • Strange problems with PHP SOAP (private variable not persist + variables passed from client not work

    - by Tamas Gal
    I have some very strange problems in a PHP SOAP implementation. I have a private variable in the server class which contains the DB name for further reference. The private variable's name is "fromdb". I have a public function on the SOAP server where I can set this variable: $client->setFromdb(...). When I call it from my client, it works perfectly and the fromdb private variable is set. But on a second SOAP client call, this private variable loses its value...

    Here is my SOAP server setup:

    ```php
    ini_set('soap.wsdl_cache_enabled', 0);
    ini_set('session.auto_start', 0);
    ini_set('always_populate_raw_post_data', 1);

    global $config_dir;

    session_start();

    /*if(!$HTTP_RAW_POST_DATA){
        $HTTP_RAW_POST_DATA = file_get_contents('php://input');
    }*/

    $server = new SoapServer("{$config_dir['template']}import.wsdl");
    $server->setClass('import');
    $server->setPersistence(SOAP_PERSISTENCE_SESSION);
    $server->handle();
    ```

    The problem is that I passed this to the server:

    ```php
    $client = new SoapClient('http://import.ingatlan.net/wsdl', array('trace' => 1));

    $xml = '';
    $xml .= '';
    $xml .= '';
    $xml .= '';
    $xml .= 'Valaki';
    $xml .= '';
    $xml .= '';
    $xml .= '';
    $xml .= '';

    $tarray = array("type" => 1, "xml" => $xml);

    try {
        $s = $client->sendXml($tarray);
        print "$s";
    } catch (SOAPFault $exception) {
        print "--- SOAP exception: {$exception} ---";
        print "LAST REQUEST:";
        var_dump($client->__getLastRequest());
        print "---";
        print "LAST RESPONSE: " . $client->__getLastResponse();
    }
    ```

    So I passed an array of information to the server. Then I got this exception:

    ```
    LAST REQUEST :
    <?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Body><type>Array</type><xml/></SOAP-ENV:Body></SOAP-ENV:Envelope>
    ```

    Can you see the word "Array" between the type tags? It seems that the client only passed a reference or something like that. So I totally missed :(


  • idiomatic property changed notification in scala?

    - by Jeremy Bell
    I'm trying to find a cleaner alternative (that is idiomatic to Scala) to the kind of thing you see with data binding in WPF/Silverlight - that is, implementing INotifyPropertyChanged.

    First, some background: in .NET WPF or Silverlight applications, you have the concept of two-way data binding (that is, binding the value of some element of the UI to a .NET property of the DataContext, in such a way that changes to the UI element affect the property, and vice versa). One way to enable this is to implement the INotifyPropertyChanged interface in your DataContext. Unfortunately, this introduces a lot of boilerplate code for any property you add to the "ModelView" type. Here is how it might look in Scala:

    ```scala
    trait IDrawable extends INotifyPropertyChanged {
      protected var drawOrder: Int = 0
      def DrawOrder: Int = drawOrder
      def DrawOrder_=(value: Int) {
        if (drawOrder != value) {
          drawOrder = value
          OnPropertyChanged("DrawOrder")
        }
      }

      protected var visible: Boolean = true
      def Visible: Boolean = visible
      def Visible_=(value: Boolean) = {
        if (visible != value) {
          visible = value
          OnPropertyChanged("Visible")
        }
      }

      def Mutate(): Unit = {
        if (Visible) {
          DrawOrder += 1 // Should trigger the PropertyChanged "event" of the INotifyPropertyChanged trait
        }
      }
    }
    ```

    For the sake of space, let's assume the INotifyPropertyChanged type is a trait that manages a list of callbacks of type (AnyRef, String) => Unit, and that OnPropertyChanged is a method that invokes all those callbacks, passing "this" as the AnyRef and the passed-in String. (This would just be an event in C#.)

    You can immediately see the problem: that's a ton of boilerplate code for just two properties. I've always wanted to write something like this instead:

    ```scala
    trait IDrawable {
      val Visible = new ObservableProperty[Boolean]('Visible, true)
      val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

      def Mutate(): Unit = {
        if (Visible) {
          DrawOrder += 1 // Should trigger the PropertyChanged "event" of the ObservableProperty class
        }
      }
    }
    ```

    I know that I can easily write it like this, if ObservableProperty[T] has Value/Value_= methods (this is the method I'm using now):

    ```scala
    trait IDrawable {
      // on a side note, is there some way to get a Symbol representing the Visible field
      // on the following line, instead of hard-coding it in the ObservableProperty
      // constructor?
      val Visible = new ObservableProperty[Boolean]('Visible, true)
      val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

      def Mutate(): Unit = {
        if (Visible.Value) {
          DrawOrder.Value += 1
        }
      }
    }

    // given this implementation of ObservableProperty[T] in my library
    // note: IEvent, Event, and EventArgs are classes in my library for
    // handling lists of callbacks - they work similarly to events in C#
    class PropertyChangedEventArgs(val PropertyName: Symbol) extends EventArgs("")

    class ObservableProperty[T](val PropertyName: Symbol, private var value: T) {
      protected val propertyChanged = new Event[PropertyChangedEventArgs]
      def PropertyChanged: IEvent[PropertyChangedEventArgs] = propertyChanged

      def Value = value;
      def Value_=(value: T) {
        if (this.value != value) {
          this.value = value
          propertyChanged(this, new PropertyChangedEventArgs(PropertyName))
        }
      }
    }
    ```

    But is there any way to implement the first version using implicits or some other feature/idiom of Scala to make ObservableProperty instances function as if they were regular "properties" in Scala, without needing to call the Value methods?

    The only other thing I can think of is something like this, which is more verbose than either of the above two versions but still less verbose than the original:

    ```scala
    trait IDrawable {
      private val visible = new ObservableProperty[Boolean]('Visible, false)
      def Visible = visible.Value
      def Visible_=(value: Boolean): Unit = { visible.Value = value }

      private val drawOrder = new ObservableProperty[Int]('DrawOrder, 0)
      def DrawOrder = drawOrder.Value
      def DrawOrder_=(value: Int): Unit = { drawOrder.Value = value }

      def Mutate(): Unit = {
        if (Visible) {
          DrawOrder += 1
        }
      }
    }
    ```


  • Constructor versus setter injection

    - by Chris
    Hi, I'm currently designing an API where I wish to allow configuration via a variety of methods. One method is via an XML configuration schema, and another is through an API that I wish to play nicely with Spring. My XML schema parsing code was previously hidden, and therefore the only concern was for it to work, but now I wish to build a public API, and I'm quite concerned about best practice.

    It seems that many favor JavaBean-type POJOs with default zero-parameter constructors and then setter injection. The problem I am trying to tackle is that some setter method implementations depend on other setter methods being called before them, in sequence. I could write setters that tolerate being called in many orders, but that will not solve the problem of a user forgetting to call the appropriate setter, leaving the bean in an incomplete state.

    The only solution I can think of is to forget about the objects being 'beans' and enforce the required parameters via constructor injection. An example of this is in the default setting of the id of a component based on the id of the parent components.

    My interface:

    ```java
    public interface IMyIdentityInterface {
        public String getId();

        /* A null value should create a unique meaningful default */
        public void setId(String id);

        public IMyIdentityInterface getParent();

        public void setParent(IMyIdentityInterface parent);
    }
    ```

    Base implementation of the interface:

    ```java
    public abstract class MyIdentityBaseClass implements IMyIdentityInterface {
        private String _id;
        private IMyIdentityInterface _parent;

        public MyIdentityBaseClass() {}

        @Override
        public String getId() {
            return _id;
        }

        /**
         * If the id is null, then use the id of the parent component
         * appended with a lower-cased simple name of the current impl
         * class along with a counter suffix to enforce uniqueness
         */
        @Override
        public void setId(String id) {
            if (id == null) {
                IMyIdentityInterface parent = getParent();
                if (parent == null) {
                    // this may be the top level component or it may be that
                    // the user called setId() before setParent(..)
                } else {
                    _id = Helpers.makeIdFromParent(parent, getClass());
                }
            } else {
                _id = id;
            }
        }

        @Override
        public IMyIdentityInterface getParent() {
            return _parent;
        }

        @Override
        public void setParent(IMyIdentityInterface parent) {
            _parent = parent;
        }
    }
    ```

    Every component in the framework will have a parent, except for the top-level component. Using setter injection, the setters will behave differently based on the order in which they are called. In this case, would you agree that a constructor taking a reference to the parent is better, and that the parent setter method should be dropped from the interface entirely? Is it considered bad practice if I wish to be able to configure these components using an IoC container?

    Chris


  • Strange behavior of move with strings

    - by Umair Ahmed
    I am testing some enhanced string-related functions, with which I am trying to use Move as a way to copy strings around for faster, more efficient use without delving into pointers. While testing a function for making a delimited string from a TStringList, I encountered a strange issue: the compiler referenced the bytes contained through the index when the string was empty, and when a string was added to it through Move, the index referenced the characters contained. Here is a small, downsized, bare-bones code sample:

    ```pascal
    unit UI;

    interface

    uses
      System.SysUtils, System.Types, System.UITypes, System.Rtti, System.Classes,
      System.Variants, FMX.Types, FMX.Controls, FMX.Forms, FMX.Dialogs, FMX.Layouts,
      FMX.Memo;

    type
      TForm1 = class(TForm)
        Results: TMemo;
        procedure FormCreate(Sender: TObject);
      end;

    var
      Form1: TForm1;

    implementation

    {$R *.fmx}

    function StringListToDelimitedString(const AStringList: TStringList;
      const ADelimiter: String): String;
    var
      Str: String;
      Temp1: NativeInt;
      Temp2: NativeInt;
      DelimiterSize: Byte;
    begin
      Result := ' ';
      Temp1 := 0;
      DelimiterSize := Length(ADelimiter) * 2;

      for Str in AStringList do
        Temp1 := Temp1 + Length(Str);

      SetLength(Result, Temp1);
      Temp1 := 1;

      for Str in AStringList do
      begin
        Temp2 := Length(Str) * 2;

        // Here Index references bytes in Result
        Move(Str[1], Result[Temp1], Temp2);

        // From here the index seems to address characters instead of bytes in Result
        Temp1 := Temp1 + Temp2;
        Move(ADelimiter[1], Result[Temp1], DelimiterSize);
        Temp1 := Temp1 + DelimiterSize;
      end;
    end;

    procedure TForm1.FormCreate(Sender: TObject);
    var
      StrList: TStringList;
      Str: String;
    begin
      // Test 1 : StringListToDelimitedString
      StrList := TStringList.Create;
      Str := '';

      StrList.Add('Hello1');
      StrList.Add('Hello2');
      StrList.Add('Hello3');
      StrList.Add('Hello4');

      Str := StringListToDelimitedString(StrList, ';');
      Results.Lines.Add(Str);

      StrList.Free;
    end;

    end.
    ```

    Please devise a solution and, if possible, some explanation. Alternatives are welcome too.


  • IQueryable and lazy loading

    - by Nelson
    I'm having a hard time determining the best way to handle this...

    With Entity Framework (and L2S), LINQ queries return IQueryable. I have read various opinions on whether the DAL/BLL should return IQueryable, IEnumerable or IList. Assuming we go with IList, then the query is run immediately and that control is not passed on to the next layer. This makes it easier to unit test, etc. You lose the ability to refine the query at higher levels, but you could simply create another method that allows you to refine the query and still return IList. And there are many more pros/cons. So far so good.

    Now comes Entity Framework and lazy loading. I am using POCO objects with proxies in .NET 4/VS 2010. In the presentation layer I do:

    ```csharp
    foreach (Order order in bll.GetOrders())
    {
        foreach (OrderLine orderLine in order.OrderLines)
        {
            // Do something
        }
    }
    ```

    In this case, GetOrders() returns IList, so it executes immediately before returning to the PL. But in the next foreach, you have lazy loading, which executes multiple SQL queries as it gets all the OrderLines. So basically, the PL is running SQL queries "on demand" in the wrong layer.

    Is there any sensible way to avoid this? I could turn lazy loading off, but then what's the point of having this "feature" that everyone was complaining EF1 didn't have? And I'll admit it is very useful in many scenarios. So I see several options:

    1. Somehow remove all associations in the entities and add methods to return them. This goes against the default EF behavior/code generation and makes it harder to do some composite (multiple-entity) LINQ queries. It seems like a step backwards. I vote no.
    2. If we have lazy loading anyway, which makes it hard to unit test, then go all the way and return IQueryable. You'll have more control farther up the layers. I still don't think this is a good option, because IQueryable ties you to L2S, L2E, or your own full implementation of IQueryable. Lazy loading may run queries "on demand", but it doesn't tie you to any specific interface. I vote no.
    3. Turn off lazy loading. You'll have to handle your associations manually. This could be done with eager loading's .Include() (a sketch follows below). I vote yes in some specific cases.
    4. Keep IList and lazy loading. I vote yes in many cases, only due to the troubles with the others.

    Any other options or suggestions? I haven't found an option that really convinces me.
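
    As a hedged sketch of option 3 (mine, not the poster's; GetOrders and the entity names come from the question, while the context name is invented), eager loading pulls the OrderLines in the same query, so the presentation layer iterates purely in memory:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    public IList<Order> GetOrders()
    {
        using (var context = new StoreEntities()) // hypothetical ObjectContext name
        {
            // Include() expands the association path in the generated SQL,
            // so no lazy loads fire when the PL walks order.OrderLines.
            return context.Orders
                          .Include("OrderLines")
                          .ToList();
        }
    }
    ```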


  • Parameters not being passed into template when using the .Net transform classes

    - by Chris F
    I am using the .NET XslCompiledTransform class to run some simple XSLT (see below for a simplified example). The example XSLT is meant simply to show the value of the parameter that is passed in to the template. The output is what I expect when I use Saxon 9.0, i.e.:

    ```xml
    <result xmlns:p1="http://www.doesnotexist.com">
      <valueOfParamA>valueA</valueOfParamA>
    </result>
    ```

    but when I use XslCompiledTransform (XslTransform) in .NET, I get:

    ```xml
    <result xmlns:p1="http://www.doesnotexist.com">
      <valueOfParamA></valueOfParamA>
    </result>
    ```

    The problem is that the value of paramA is not being passed into the template when I use the .NET classes. I am completely stumped as to why. When I step through in Visual Studio, the debugger says that the template will be called with paramA='valueA', but when execution switches to the template, the value of paramA is blank. Can anyone explain why this is happening? Is this a bug in the MS implementation or (more likely) am I doing something that is forbidden in XSLT? Any help greatly appreciated.

    This is the XSLT that I am using:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns:extfn="http://exslt.org/common"
                    exclude-result-prefixes="extfn"
                    xmlns:p1="http://www.doesnotexist.com">
      <!-- Replace msxml with xmlns:extfn="http://exslt.org/common"
           xmlns:extfn="urn:schemas-microsoft-com:xslt" -->
      <xsl:output method="xml" indent="yes"/>

      <xsl:template match="/">
        <xsl:variable name="resultTreeFragment">
          <p1:foo>
          </p1:foo>
        </xsl:variable>
        <xsl:variable name="nodeset" select="extfn:node-set($resultTreeFragment)"/>
        <result>
          <xsl:apply-templates select="$nodeset" mode="AParticularMode">
            <xsl:with-param name="paramA" select="'valueA'"/>
          </xsl:apply-templates>
        </result>
      </xsl:template>

      <xsl:template match="@* | node()" mode="AParticularMode">
        <xsl:param name="paramA"/>
        <valueOfParamA>
          <xsl:value-of select="$paramA"/>
        </valueOfParamA>
      </xsl:template>
    </xsl:stylesheet>
    ```

