Search Results

Search found 8397 results on 336 pages for 'implementation'.

Page 298/336 | < Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >

  • What's a clean way to break up a DataTable into chunks of a fixed size with Linq?

    - by Michael Haren
    Update: Here's a similar question Suppose I have a DataTable with a few thousand DataRows in it. I'd like to break up the table into chunks of smaller rows for processing. I thought C#3's improved ability to work with data might help. This is the skeleton I have so far: DataTable Table = GetTonsOfData(); // Chunks should be any IEnumerable<Chunk> type var Chunks = ChunkifyTableIntoSmallerChunksSomehow; // ** help here! ** foreach(var Chunk in Chunks) { // Chunk should be any IEnumerable<DataRow> type ProcessChunk(Chunk); } Any suggestions on what should replace ChunkifyTableIntoSmallerChunksSomehow? I'm really interested in how someone would do this with access to C#3 tools. If attempting to apply these tools is inappropriate, please explain! Update 3 (revised chunking as I really want tables, not IEnumerables; going with an extension method--thanks Jacob): Final implementation: Extension method to handle the chunking: public static class HarenExtensions { public static IEnumerable<DataTable> Chunkify(this DataTable table, int chunkSize) { for (int i = 0; i < table.Rows.Count; i += chunkSize) { DataTable Chunk = table.Clone(); foreach (DataRow Row in table.Select().Skip(i).Take(chunkSize)) { Chunk.ImportRow(Row); } yield return Chunk; } } } Example consumer of that extension method, with sample output from an ad hoc test: class Program { static void Main(string[] args) { DataTable Table = GetTonsOfData(); foreach (DataTable Chunk in Table.Chunkify(100)) { Console.WriteLine("{0} - {1}", Chunk.Rows[0][0], Chunk.Rows[Chunk.Rows.Count - 1][0]); } Console.ReadLine(); } static DataTable GetTonsOfData() { DataTable Table = new DataTable(); Table.Columns.Add(new DataColumn()); for (int i = 0; i < 1000; i++) { DataRow Row = Table.NewRow(); Row[0] = i; Table.Rows.Add(Row); } return Table; } }
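
    A side note on the final Chunkify: table.Select().Skip(i) rebuilds and re-walks the row array for every chunk, which is quadratic in the number of rows. A single-pass variant that could drop into HarenExtensions is sketched below - an untested sketch with the same assumed behaviour, not the poster's code:

        public static IEnumerable<DataTable> ChunkifySinglePass(this DataTable table, int chunkSize)
        {
            DataTable chunk = table.Clone(); // Clone() copies the schema only, not the rows
            foreach (DataRow row in table.Rows)
            {
                chunk.ImportRow(row);
                if (chunk.Rows.Count == chunkSize)
                {
                    yield return chunk;
                    chunk = table.Clone();
                }
            }
            if (chunk.Rows.Count > 0)
                yield return chunk; // trailing partial chunk
        }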

    Read the article

  • Get Ajax4JSF (a4j component) running on Glassfish

    - by yournamehere
    I'm trying to build a JEE6 application on Glassfish V3, using JSF 2.0, Weld, JPA2 and Maven. Now I'm having trouble getting a simple <a4j:support> running. This is the fragment of my little example. When typing something into the inputText, the outputText should automatically be updated. But nothing happens (not in Firefox, not in IE8). <ui:composition xmlns:a4j="https://ajax4jsf.dev.java.net/ajax" (...)> <h:inputText value="#{personHome.message}"> <a4j:support event="onkeyup" reRender="repeater"/> </h:inputText> <h:outputText id="repeater" value="#{personHome.message}"/> Besides the fact that my example doesn't work, my problem is also that I don't really understand whether or not I need a JSF implementation (MyFaces, Richfaces, Primefaces etc.) to use a4j elements. Is it "built-in" in Glassfish? Until now, I only have the following dependencies I think I need for JSF: <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-api</artifactId> <version>2.0.2</version> </dependency> <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-impl</artifactId> <version>2.0.2</version> </dependency> <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>6.0</version> <scope>provided</scope> </dependency> So... what do I have to do to get Ajax4JSF running on a simple JEE app on Glassfish? Any help is highly appreciated!

    Read the article

  • C++ vs. C++/CLI: Const qualification of virtual function parameters

    - by James McNellis
    [All of the following was tested using Visual Studio 2008 SP1] In C++, const qualification of parameter types does not affect the type of a function (8.3.5/3: "Any cv-qualifier modifying a parameter type is deleted") So, for example, in the following class hierarchy, Derived::Foo overrides Base::Foo: struct Base { virtual void Foo(const int i) { } }; struct Derived : Base { virtual void Foo(int i) { } }; Consider a similar hierarchy in C++/CLI: ref class Base abstract { public: virtual void Foo(const int) = 0; }; ref class Derived : public Base { public: virtual void Foo(int i) override { } }; If I then create an instance of Derived: int main(array<System::String ^> ^args) { Derived^ d = gcnew Derived; } it compiles without errors or warnings. When I run it, it throws the following exception and then terminates: An unhandled exception of type 'System.TypeLoadException' occurred in ClrVirtualTest.exe Additional information: Method 'Foo' in type 'Derived'...does not have an implementation. That exception seems to indicate that the const qualification of the parameter does affect the type of the function in C++/CLI (or, at least it affects overriding in some way). However, if I comment out the line containing the definition of Derived::Foo, the compiler reports the following error (on the line in main where the instance of Derived is instantiated): error C2259: 'Derived': cannot instantiate abstract class If I add the const qualifier to the parameter of Derived::Foo or remove the const qualifier from the parameter of Base::Foo, it compiles and runs with no errors. I would think that if the const qualification of the parameter affects the type of the function, I should get this error if the const qualification of the parameter in the derived class virtual function does not match the const qualification of the parameter in the base class virtual function. If I change the type of Derived::Foo's parameter from an int to a double, I get the following warning (in addition to the aforementioned error, C2259): warning C4490: 'override': incorrect use of override specifier; 'Derived::Foo' does not match a base ref class method So, my question is, effectively, does the const qualification of function parameters affect the type of the function in C++/CLI? If so, why does this compile and why are there no errors or warnings? If not, why is an exception thrown?

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map. The uncontroversial (?) code that sets up the map is this: /** Set up the map from the given filename and position */ protected void open() throws IOException { // Set up buffer, is this all the flexibility we'll need? channel = new FileInputStream(file).getChannel(); MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size()); map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes? map = map1; // assumes the host writing the files is little-endian (x86), ought to be configurable map.order(java.nio.ByteOrder.LITTLE_ENDIAN); map.position(position); } and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny, the dev host is Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM, and are running with the same java heap settings. I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515 which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

    Read the article

  • CSS3's border-radius property and border-collapse:collapse don't mix. How can I use border-radius to have a collapsed, rounded corner table?

    - by vamin
    Edit - Original Title: Is there an alternative way to achieve border-collapse:collapse in CSS (in order to have a collapsed, rounded corner table)? Since it turns out that simply getting the table's borders to collapse does not solve the root problem, I have updated the title to better reflect the discussion. I am trying to make a table with rounded corners using the CSS3 border-radius property. The table styles I'm using look something like this: table { -moz-border-radius:10px; -webkit-border-radius:10px; border-radius:10px} Here's the problem. I also want to set the border-collapse:collapse property, and when that is set border-radius no longer works (at least in Firefox) (edit - I thought this might just be a difference in mozilla's implementation, but it turns out this is the way it's supposed to work according to the w3c). Is there a CSS-based way I can get the same effect as border-collapse:collapse without actually using it? Edits: I've made a simple page to demonstrate the problem here (Firefox/Safari only). It seems that a large part of the problem is that setting the table to have rounded corners does not affect the corners of the corner td elements. If the table was all one color, this wouldn't be a problem since I could just make the top and bottom td corners rounded for the first and last row respectively. However, I am using different background colors for the table to differentiate the headings and for striping, so the inner td elements would show their rounded corners as well. Summary of proposed solutions: Surrounding the table with another element with rounded corners doesn't work because the table's square corners "bleed through." Specifying border width to 0 doesn't collapse the table. Bottom td corners still square after setting cellspacing to zero. Using JavaScript instead - works by avoiding the problem. Possible solutions: The tables are generated in PHP, so I could just apply a different class to each of the outer th/tds and style each corner separately. I'd rather not do this, since it's not very elegant and a bit of a pain to apply to multiple tables, so please keep suggestions coming. Possible solution 2 is to use JavaScript (jQuery, specifically) to style the corners. This solution also works, but still not quite what I'm looking for (I know I'm picky). I have two reservations: 1) this is a very lightweight site, and I'd like to keep JavaScript to the barest minimum 2) part of the appeal that using border-radius has for me is graceful degradation and progressive enhancement. By using border-radius for all rounded corners, I hope to have a consistently rounded site in CSS3-capable browsers and a consistently square site in others (I'm looking at you, IE). I know that trying to do this with CSS3 today may seem needless, but I have my reasons. I would also like to point out that this problem is a result of the w3c specification, not poor CSS3 support, so any solution will still be relevant and useful when CSS3 has more widespread support.

    Read the article

  • Fluent NHibernate not working outside of NUnit test fixtures

    - by thorkia
    Okay, here is my problem... I created a Data Layer using the RTM Fluent NHibernate. My create session code looks like this: _session = Fluently.Configure(). Database(SQLiteConfiguration.Standard.UsingFile("Data.s3db")) .Mappings( m => { m.FluentMappings.AddFromAssemblyOf<ProductMap>(); m.FluentMappings.AddFromAssemblyOf<ProductLogMap>(); }) .ExposeConfiguration(BuildSchema) .BuildSessionFactory(); When I reference the module in a test project, then create a test fixture that looks something like this: [Test] public void CanAddProduct() { var product = new Product {Code = "9", Name = "Test 9"}; IProductRepository repository = new ProductRepository(); repository.AddProduct(product); using (ISession session = OrmHelper.OpenSession()) { var fromDb = session.Get<Product>(product.Id); Assert.IsNotNull(fromDb); Assert.AreNotSame(fromDb, product); Assert.AreEqual(fromDb.Id, product.Id); } My tests pass. When I open up the created SQLite DB, the new Product with Code 9 is in it. The tables for Product and ProductLog are there. Now, when I create a new console application, and reference the same library, do something like this: Product product = new Product() {Code = "10", Name = "Hello"}; IProductRepository repository = new ProductRepository(); repository.AddProduct(product); Console.WriteLine(product.Id); Console.ReadLine(); It doesn't work. I actually get a pretty nasty exception chain. To save you lots of headaches, here is the summary: Top Level exception: An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.\r\n\r\n The PotentialReasons collection is empty The Inner exception: The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use the <qualifyAssembly> element in the application configuration file to specify the full name of the assembly. Both the unit test library and the console application reference the exact same version of System.Data.SQLite. Both projects have the exact same DLLs in the debug folder. I even tried copying the SQLite DB the unit test library created into the debug directory of the console app, and removed the build schema lines, and it still fails. If anyone can help me figure out why this won't work outside of my unit tests it would be greatly appreciated. This crazy bug has me at a standstill.

    Read the article

  • Available Coroutine Libraries in Java

    - by JUST MY correct OPINION
    I would like to do some stuff in Java that would be clearer if written using concurrent routines, but for which full-on threads are serious overkill. The answer, of course, is the use of coroutines, but there doesn't appear to be any coroutine support in the standard Java libraries and a quick Google on it brings up tantalising hints here or there, but nothing substantial. Here's what I've found so far: JSIM has a coroutine class, but it looks pretty heavyweight and seemingly conflates coroutines with threads at points. The point of this is to reduce the complexity of full-on threading, not to add to it. Further, I'm not sure that the class can be extracted from the library and used independently. Xalan has a coroutine set class that does coroutine-like stuff, but again it's dubious if this can be meaningfully extracted from the overall library. It also looks like it's implemented as a tightly-controlled form of thread pool, not as actual coroutines. There's a Google Code project which looks like what I'm after, but if anything it looks more heavyweight than using threads would be. I'm basically nervous of something that requires software to dynamically change the JVM bytecode at runtime to do its work. This looks like overkill and like something that will cause more problems than coroutines would solve. Further, it looks like it doesn't implement the whole coroutine concept. By my glance-over it gives a yield feature that just returns to the invoker. Proper coroutines allow yields to transfer control to any known coroutine directly. Basically this library, heavyweight and scary as it is, only gives you support for iterators, not fully-general coroutines. The promisingly-named Coroutine for Java fails because it's a platform-specific (obviously using JNI) solution. And that's about all I've found. I know about the native JVM support for coroutines in the Da Vinci Machine and I also know about the JNI continuations trick for doing this. These are not really good solutions for me, however, as I would not necessarily have control over which VM or platform my code would run on. (Indeed any bytecode manipulation system would suffer similar problems -- it would be best were this pure Java if possible. Runtime bytecode manipulation would restrict me from using this on Android, for example.) So does anybody have any pointers? Is this even possible? If not, will it be possible in Java 7? Edited to add: Just to ensure that confusion is contained, this is a related question to my other one, but not the same. This one is looking for an existing implementation in a bid to avoid reinventing the wheel unnecessarily. The other one is a question relating to how one would go about implementing coroutines in Java should this question prove unanswerable. The intent is to keep different questions on different threads.

    Read the article

  • How do you prevent IDisposable from spreading to all your classes?

    - by GrahamS
    Start with these simple classes... Let's say I have a simple set of classes like this: class Bus { Driver busDriver = new Driver(); } class Driver { Shoe[] shoes = { new Shoe(), new Shoe() }; } class Shoe { Shoelace lace = new Shoelace(); } class Shoelace { bool tied = false; } A Bus has a Driver, the Driver has two Shoes, each Shoe has a Shoelace. All very silly. Add an IDisposable object to Shoelace Later I decide that some operation on the Shoelace could be multi-threaded, so I add an EventWaitHandle for the threads to communicate with. So Shoelace now looks like this: class Shoelace { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; // ... other stuff .. } Implement IDisposable on Shoelace But now FxCop will complain: "Implement IDisposable on 'Shoelace' because it creates members of the following IDisposable types: 'EventWaitHandle'." Okay, I implement IDisposable on Shoelace and my neat little class becomes this horrible mess: class Shoelace : IDisposable { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; private bool disposed = false; public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } ~Shoelace() { Dispose(false); } protected virtual void Dispose(bool disposing) { if (!this.disposed) { if (disposing) { if (waitHandle != null) { waitHandle.Close(); waitHandle = null; } } // No unmanaged resources to release otherwise they'd go here. } disposed = true; } } Or (as pointed out by commenters) since Shoelace itself has no unmanaged resources, I might use the simpler dispose implementation without needing the Dispose(bool) and destructor: class Shoelace : IDisposable { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; public void Dispose() { if (waitHandle != null) { waitHandle.Close(); waitHandle = null; } GC.SuppressFinalize(this); } } Watch in horror as IDisposable spreads Right, that's that fixed. But now FxCop will complain that Shoe creates a Shoelace, so Shoe must be IDisposable too. And Driver creates Shoe so Driver must be IDisposable. And Bus creates Driver so Bus must be IDisposable and so on. Suddenly my small change to Shoelace is causing me a lot of work and my boss is wondering why I need to check out Bus to make a change to Shoelace. The Question How do you prevent this spread of IDisposable, but still ensure that your unmanaged objects are properly disposed?
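
    For what it's worth, the conventional answer is that this spread is by design: IDisposable is viral because ownership is. Since none of these classes hold unmanaged resources directly, each owner only needs the short pattern (no finalizer, no Dispose(bool)) - a sketch:

        class Shoe : IDisposable
        {
            Shoelace lace = new Shoelace();
            public void Dispose() { lace.Dispose(); }   // dispose what you created
        }

        class Driver : IDisposable
        {
            Shoe[] shoes = { new Shoe(), new Shoe() };
            public void Dispose()
            {
                foreach (var shoe in shoes) shoe.Dispose();
            }
        }

        class Bus : IDisposable
        {
            Driver busDriver = new Driver();
            public void Dispose() { busDriver.Dispose(); }
        }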

    Read the article

  • Why does Git.pm on Cygwin complain about 'Out of memory during "large" request'?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in Cygwin: Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3. 268439552 is 256MB. Cygwin's maximum memory size is set to 1024MB, so I'm guessing that it has a different maximum memory size for Perl? How can I increase the maximum memory size that Perl programs can use? Update: This is where the error occurs (in Git.pm): while (1) { my $bytesLeft = $size - $bytesRead; last unless $bytesLeft; my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024; my $read = read($in, $blob, $bytesToRead, $bytesRead); //line 898 unless (defined($read)) { $self->_close_cat_blob(); throw Error::Simple("in pipe went bad"); } $bytesRead += $read; } I've added a print before line 898 to print out $bytesToRead and $bytesRead, and the result was 1024 for $bytesToRead and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and has already read 128MB. Perl's 'read' function must be out of memory and is trying to request double its memory size... Is there a way to specify how much memory to request? Or is that implementation-dependent? UPDATE2: While testing memory allocation in Cygwin: This C program's output was 1536MB: int main() { unsigned int bit=0x40000000, sum=0; char *x; while (bit > 4096) { x = malloc(bit); if (x) sum += bit; bit >>= 1; } printf("%08x bytes (%.1fMb)\n", sum, sum/1024.0/1024.0); return 0; } While this Perl program crashed if the file size was greater than 384MB (but succeeded if the file size was less): open(F, "<400") or die("can't read\n"); $size = -s "400"; $read = read(F, $s, $size); The error is similar: Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.

    Read the article

  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation: To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer. This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.) Some alternatives we're considering: Service Locator Don't use dependency injection; instead, use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context? Dependency Injection via properties Inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected. Inject dependencies with an Initialize method This is much like injection via properties but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assist you with errors when the list changes. Any idea what the best practice is here? How do you do it? edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification) so I don't think that "you shouldn't inject any services" is a good answer. edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.
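
    For reference, the "injection via properties" alternative is easy to sketch. IClock and SystemClock below are hypothetical stand-ins for a real dependency; the lazy default keeps the component usable on a designer surface while still letting a composition root overwrite it:

        using System;
        using System.ComponentModel;

        public interface IClock { DateTime Now { get; } }
        public class SystemClock : IClock { public DateTime Now { get { return DateTime.Now; } } }

        public class ReportComponent : Component
        {
            private IClock clock;

            public ReportComponent() { } // the parameterless constructor the designer requires

            // Dependency injected via property, with a lazy default so the
            // component still behaves when nothing injects it.
            public IClock Clock
            {
                get { return clock ?? (clock = new SystemClock()); }
                set { clock = value; }
            }
        }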

    Read the article

  • Resize an array of images with OpenCV

    - by amr
    I'm passing an array of images (IplImage**) to an object in C++ using OpenCV. I'm then trying to iterate over that array and resize them all to a fixed size (150x150). I'm doing it this way: for(int i = 0; i< this->numTrainingFaces; i++) { IplImage* frame_copy = cvCreateImage( cvSize(150,150), this->faceImageArray[0]->depth, this->faceImageArray[0]->nChannels ); cout << "Created image" << endl; cvResize(this->faceImageArray[i], frame_copy); cout << "Resized image" << endl; IplImage* grey_image = cvCreateImage( cvSize( frame_copy->width, frame_copy->height ), IPL_DEPTH_8U, 1 ); cout << "Created grey image" << endl; cvCvtColor( frame_copy, grey_image, CV_RGB2GRAY ); cout << "Converted image" << endl; this->faceImageArray[i] = grey_image; cvReleaseImage(&frame_copy); cvReleaseImage(&grey_image); } But I'm getting this output, and I'm not sure why: Created image Resized image Created grey image Converted image Created image OpenCV Error: Assertion failed (src.type() == dst.type()) in cvResize, file /build/buildd/opencv-2.1.0/src/cv/cvimgwarp.cpp, line 3102 terminate called after throwing an instance of 'cv::Exception' what(): /build/buildd/opencv-2.1.0/src/cv/cvimgwarp.cpp:3102: error: (-215) src.type() == dst.type() in function cvResize Aborted I'm basically just trying to replace the image in the array with the resized one in as few steps as possible. Edit: Revised my code as follows: for(int i = 0; i< this->numTrainingFaces; i++) { IplImage* frame_copy = cvCreateImage( cvSize(150,150), this->faceImageArray[i]->depth, this->faceImageArray[i]->nChannels ); cvResize(this->faceImageArray[i], frame_copy); IplImage* grey_image = cvCreateImage( cvSize( frame_copy->width, frame_copy->height ), IPL_DEPTH_8U, 1 ); cvCvtColor( frame_copy, grey_image, CV_RGB2GRAY ); faceImageArray[i] = cvCreateImage( cvSize(grey_image->width, grey_image->height), grey_image->depth, grey_image->nChannels); cvCopy(grey_image,faceImageArray[i]); cvReleaseImage(&frame_copy); cvReleaseImage(&grey_image); } Then later on I'm performing some PCA, and get this output: OpenCV Error: Null pointer (Null pointer to the written object) in cvWrite, file /build/buildd/opencv-2.1.0/src/cxcore/cxpersistence.cpp, line 4740 But I don't think my code has got to the point where I'm explicitly calling cvWrite, so it must be part of the library. I can give a full implementation if necessary - is there anything in my code that's going to create a null pointer?

    Read the article

  • Grouping geographical shapes

    - by grenade
    I am using Dundas Maps and attempting to draw a map of the world where countries are grouped into regions that are specific to a business implementation. I have shape data (points and segments) for each country in the world. I can combine countries into regions by adding all points and segments for countries within a region to a new region shape. foreach(var region in GetAllRegions()){ var regionShape = new Shape { Name = region.Name }; foreach(var country in GetCountriesInRegion(region.Id)){ var countryShape = GetCountryShape(country.Id); regionShape.AddSegments(countryShape.ShapeData.Points, countryShape.ShapeData.Segments); } map.Shapes.Add(regionShape); } The problem is that the country border lines still show up within a region and I want to remove them so that only regional borders show up. Dundas polygons must start and end at the same point. This is the case for all the country shapes. Now I need an algorithm that can: Determine where country borders intersect at a regional border, so that I can join the regional border segments. Determine which country borders are not regional borders so that I can discard them. Sort the resulting regional points so that they sequentially describe the shape boundaries. Below is where I have gotten to so far with the map. You can see that the country borders still need to be removed. For example, the border between Mongolia and China should be discarded whereas the border between Mongolia and Russia should be retained. The reason I need to retain a regional border is that the region colors will be significant in conveying information but adjacent regions may be the same color. The regions can change to include or exclude countries and this is why the regional shaping must be dynamic. EDIT: I now know that what I am looking for is a UNION of polygons. David Lean explains how to do it using the spatial functions in SQL Server 2008 which might be an option but my efforts have come to a halt because the resulting polygon union is so complex that SQL truncates it at 43,680 characters. I'm now trying to either find a workaround for that or find a way of doing the union in code.
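
    One in-code shortcut worth sketching: if neighbouring countries share identical vertex sequences along their common border (which country shape data usually does), a segment that appears in two countries of the same region is an internal border, while segments used exactly once form the regional outline. Edge and GetCountryEdges below are hypothetical; only region and GetCountriesInRegion come from the code above:

        // Count how many countries in the region use each (normalized) segment.
        var edgeCounts = new Dictionary<Edge, int>();
        foreach (var country in GetCountriesInRegion(region.Id))
        {
            foreach (var edge in GetCountryEdges(country.Id)) // hypothetical helper
            {
                Edge key = edge.Normalize(); // order endpoints so A->B equals B->A
                int n;
                edgeCounts.TryGetValue(key, out n);
                edgeCounts[key] = n + 1;
            }
        }
        // Segments used once are the regional outline; shared ones are discarded.
        var outline = edgeCounts.Where(kv => kv.Value == 1).Select(kv => kv.Key).ToList();
        // 'outline' still needs chaining end-to-end (matching shared endpoints)
        // to produce the ordered, closed point list Dundas expects.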

    Read the article

  • In a DDD approach, is this example modelled correctly?

    - by Tag
    Just created an acc on SO to ask this :) Assuming this simplified example: building a web application to manage projects... The application has the following requirements/rules. 1) Users should be able to create projects inserting the project name. 2) Project names cannot be empty. 3) Two projects can't have the same name. I'm using a 4-layered architecture (User Interface, Application, Domain, Infrastructure). On my Application Layer I have the following ProjectService.cs class: public class ProjectService { private IProjectRepository ProjectRepo { get; set; } public ProjectService(IProjectRepository projectRepo) { ProjectRepo = projectRepo; } public void CreateNewProject(string name) { IList<Project> projects = ProjectRepo.GetProjectsByName(name); if (projects.Count > 0) throw new Exception("Project name already exists."); Project project = new Project(name); ProjectRepo.InsertProject(project); } } On my Domain Layer, I have the Project.cs class and the IProjectRepository.cs interface: public class Project { public int ProjectID { get; private set; } public string Name { get; private set; } public Project(string name) { ValidateName(name); Name = name; } private void ValidateName(string name) { if (name == null || name.Equals(string.Empty)) { throw new Exception("Project name cannot be empty or null."); } } } public interface IProjectRepository { void InsertProject(Project project); IList<Project> GetProjectsByName(string projectName); } On my Infrastructure layer, I have the implementation of IProjectRepository which does the actual querying (the code is irrelevant). I don't like two things about this design: 1) I've read that the repository interfaces should be a part of the domain but the implementations should not. That makes no sense to me since I think the domain shouldn't call the repository methods (persistence ignorance), that should be a responsibility of the services in the application layer. (Something tells me I'm terribly wrong.) 2) The process of creating a new project involves two validations (not null and not duplicate). In my design above, those two validations are scattered in two different places making it harder (IMHO) to see what's going on. So, my question is, from a DDD perspective, is this modelled correctly or would you do it in a different way?
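
    A sketch of one common refactoring for point 2: make the uniqueness rule an explicit domain service, so both rules are stated in the domain layer and the application service only orchestrates. The class and method names here are made up:

        public class ProjectUniquenessChecker
        {
            private readonly IProjectRepository repo;

            public ProjectUniquenessChecker(IProjectRepository repo) { this.repo = repo; }

            public void AssertNameIsAvailable(string name)
            {
                if (repo.GetProjectsByName(name).Count > 0)
                    throw new Exception("Project name already exists.");
            }
        }

    The not-null rule stays where it is, in the Project constructor, and a unique index in the database remains worth keeping as a backstop against the race between the check and the insert.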

    Read the article

  • Idiomatic property changed notification in Scala?

    - by Jeremy Bell
    I'm trying to find a cleaner alternative (that is idiomatic to Scala) to the kind of thing you see with data-binding in WPF/Silverlight - that is, implementing INotifyPropertyChanged. First, some background: In .NET WPF or Silverlight applications, you have the concept of two-way data-binding (that is, binding the value of some element of the UI to a .NET property of the DataContext in such a way that changes to the UI element affect the property, and vice versa). One way to enable this is to implement the INotifyPropertyChanged interface in your DataContext. Unfortunately, this introduces a lot of boilerplate code for any property you add to the "ModelView" type. Here is how it might look in Scala: trait IDrawable extends INotifyPropertyChanged { protected var drawOrder : Int = 0 def DrawOrder : Int = drawOrder def DrawOrder_=(value : Int) { if(drawOrder != value) { drawOrder = value OnPropertyChanged("DrawOrder") } } protected var visible : Boolean = true def Visible : Boolean = visible def Visible_=(value: Boolean) = { if(visible != value) { visible = value OnPropertyChanged("Visible") } } def Mutate() : Unit = { if(Visible) { DrawOrder += 1 // Should trigger the PropertyChanged "Event" of INotifyPropertyChanged trait } } } For the sake of space, let's assume the INotifyPropertyChanged type is a trait that manages a list of callbacks of type (AnyRef, String) => Unit, and that OnPropertyChanged is a method that invokes all those callbacks, passing "this" as the AnyRef, and the passed-in String). This would just be an event in C#. You can immediately see the problem: that's a ton of boilerplate code for just two properties. I've always wanted to write something like this instead: trait IDrawable { val Visible = new ObservableProperty[Boolean]('Visible, true) val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0) def Mutate() : Unit = { if(Visible) { DrawOrder += 1 // Should trigger the PropertyChanged "Event" of ObservableProperty class } } } I know that I can easily write it like this, if ObservableProperty[T] has Value/Value_= methods (this is the method I'm using now): trait IDrawable { // on a side note, is there some way to get a Symbol representing the Visible field // on the following line, instead of hard-coding it in the ObservableProperty // constructor? val Visible = new ObservableProperty[Boolean]('Visible, true) val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0) def Mutate() : Unit = { if(Visible.Value) { DrawOrder.Value += 1 } } } // given this implementation of ObservableProperty[T] in my library // note: IEvent, Event, and EventArgs are classes in my library for // handling lists of callbacks - they work similarly to events in C# class PropertyChangedEventArgs(val PropertyName: Symbol) extends EventArgs("") class ObservableProperty[T](val PropertyName: Symbol, private var value: T) { protected val propertyChanged = new Event[PropertyChangedEventArgs] def PropertyChanged: IEvent[PropertyChangedEventArgs] = propertyChanged def Value = value; def Value_=(value: T) { if(this.value != value) { this.value = value propertyChanged(this, new PropertyChangedEventArgs(PropertyName)) } } } But is there any way to implement the first version using implicits or some other feature/idiom of Scala to make ObservableProperty instances function as if they were regular "properties" in Scala, without needing to call the Value methods?
The only other thing I can think of is something like this, which is more verbose than either of the above two versions, but is still less verbose than the original: trait IDrawable { private val visible = new ObservableProperty[Boolean]('Visible, false) def Visible = visible.Value def Visible_=(value: Boolean): Unit = { visible.Value = value } private val drawOrder = new ObservableProperty[Int]('DrawOrder, 0) def DrawOrder = drawOrder.Value def DrawOrder_=(value: Int): Unit = { drawOrder.Value = value } def Mutate() : Unit = { if(Visible) { DrawOrder += 1 } } }

    Read the article

  • Adding NavigationControl to a TabBar Application containing UITableViews

    - by kungfuslippers
    Hi, I'm new to iPhone dev and wanted to get advice on the general design pattern / guide for putting a certain kind of app together. I'm trying to build a TabBar type application. One of the tabs needs to display a TableView and selecting a cell from within the table view will do something else - maybe show another table view or a web page. I need a Navigation Bar to be able to take me back from the table view/web page. The approach I've taken so far is to: Create an app based around UITabBarController as the rootcontroller i.e. @interface MyAppDelegate : NSObject <UIApplicationDelegate> { IBOutlet UIWindow *window; IBOutlet UITabBarController *rootController; } Create a load of UIViewController derived classes and associated NIBs and wire everything up in IB so when I run the app I get the basic tabs working. I then take the UIViewController derived class and modify it to the following: @interface MyViewController : UIViewController<UITableViewDataSource, UITableViewDelegate> { } and I add the delegate methods to the implementation of MyViewController - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return 2; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } if (indexPath.row == 0) { cell.textLabel.text = @"Mummy"; } else { cell.textLabel.text = @"Daddy"; } return cell; } Go back to IB, open MyViewController.xib and drop a UITableView onto it. Set the File's Owner to be MyViewController and then set the delegate and datasource of the UITableView to be MyViewController. If I run the app now, I get the table view appearing with Mummy and Daddy working nicely. So far so good. The question is how do I go about incorporating a Navigation Bar into my current code for when I implement: - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // get row selected NSUInteger row = [indexPath row]; if (row == 0) { // Show another table } else if (row == 1) { // Show a web view } } Do I drop a NavigationBar UI control onto MyControllerView.xib? Should I create it programmatically? Should I be using a UINavigationController somewhere? I've tried dropping a NavigationBar onto my MyControllerView.xib in IB but it's not shown when I run the app, only the TableView is displayed.

    Read the article

  • WTF is wtf? (in WebKit code base)

    - by Motti
    I downloaded Chromium's code base and ran across the WTF namespace. namespace WTF { /* * C++'s idea of a reinterpret_cast lacks sufficient cojones. */ template<typename TO, typename FROM> TO bitwise_cast(FROM in) { COMPILE_ASSERT(sizeof(TO) == sizeof(FROM), WTF_wtf_reinterpret_cast_sizeof_types_is_equal); union { FROM from; TO to; } u; u.from = in; return u.to; } } // namespace WTF Does this mean what I think it means? It could be so: the bitwise_cast implementation specified here will not compile if either TO or FROM is not a POD, and is not (AFAIK) more powerful than C++'s built-in reinterpret_cast. The only point of light I see here is that nobody seems to be using bitwise_cast in the Chromium project. I see there's some legalese so I'll put in the little letters to keep out of trouble. /* * Copyright (C) 2008 Apple Inc. All Rights Reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */

    Read the article

  • Escaping Code for Different Shells

    - by Jon Purdy
    Question: What characters do I need to escape in a user-entered string to securely pass it into shells on Windows and Unix? What shell differences and version differences should be taken into account? Can I use printf "%q" somehow, and is that reliable across shells? Backstory (a.k.a. Shameless Self-Promotion): I made a little DSL, the Vision Web Template Language, which allows the user to create templates for X(HT)ML documents and fragments, then automatically fill them in with content. It's designed to separate template logic from dynamic content generation, in the same way that CSS is used to separate markup from presentation. In order to generate dynamic content, a Vision script must defer to a program written in a language that can handle the generation logic, such as Perl or Python. (Aside: using PHP is also possible, but Vision is intended to solve some of the very problems that PHP perpetuates.) In order to do this, the script makes use of the @system directive, which executes a shell command and expands to its output. (Platform-specific generation can be handled using @unix or @windows, which only expand on the proper platform.) The problem is obvious, I should think: test.htm: <!-- ... --> <form action="login.vis" method="POST"> <input type="text" name="USERNAME"/> <input type="password" name="PASSWORD"/> </form> <!-- ... --> login.vis: #!/usr/bin/vision # Think USERNAME = ";rm -f;" @system './login.pl' { USERNAME; PASSWORD } One way to safeguard against this kind of attack is to set proper permissions on scripts and directories, but Web developers may not always set things up correctly, and the naive developer should get just as much security as the experienced one. The solution, logically, is to include a @quote directive that produces a properly escaped string for the current platform. @system './login.pl' { @quote : USERNAME; @quote : PASSWORD } But what should @quote actually do? It needs to be both cross-platform and secure, and I don't want to create terrible problems with a naive implementation. Any thoughts?
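
    On the Unix side, the usual answer is not a character blacklist but wholesale single-quoting: wrap the string in single quotes and rewrite each embedded ' as '\''. (printf "%q" is a bash extension, not required by POSIX, so it can't be relied on across shells.) A sketch, illustrated here in C# since Vision's host language isn't stated; Windows cmd.exe follows entirely different rules and needs its own code path:

        static string QuoteForPosixShell(string arg)
        {
            // Inside single quotes nothing is special; the only character that
            // must be handled is the single quote itself: close, escape, reopen.
            return "'" + arg.Replace("'", "'\\''") + "'";
        }
        // QuoteForPosixShell(";rm -f;") yields "';rm -f;'", which the shell
        // passes through as literal data rather than executing.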

    Read the article

  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of textbook (or more likely in all of them) but I seem to be using the wrong keywords to search for it... :( A common task I'm facing while programming is that I am dealing with lists of objects from different sources which I need to keep in sync somehow. Typically there's some sort of "master list", e.g. returned by some external API, and then a list of objects I create myself, each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: For instance the external list might not implement notifications about items being added or removed or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information, so simply storing references to the external objects might also not always be feasible. Another characteristic of the problem is that both lists cannot be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing and rebuilding it from scratch is not an option. So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented this in various ways (see below for an example) but it always felt like there should be a cleaner and more efficient way. Here's an example approach: (1) Iterate over the master list. (2) Look up each item in the "slave list". (3) Add items that do not yet exist. (4) Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list). (5) When done, iterate once more over the slave list. (6) Remove all objects that have not been tagged (see 4). Update Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worthy of note: In many of the situations where I needed this the implementation of the "master list" is completely hidden from me. In the most extreme cases the only access I might have to the master list might be a COM interface that exposes nothing but GetFirst-, GetNext-style methods. I'm mentioning this because of the suggestions to either sort the list or to subclass it, both of which are unfortunately not practical in these cases unless I copy the elements into a list of my own, and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible: in particular, the elements in the master list might be available as interface references only.
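
    For what it's worth, this is usually just called list reconciliation (a merge); since the lists can't be sorted, a hash lookup stands in for the textbook sorted-merge step, giving O(n+m) instead of O(n*m). A sketch of the six steps above with a HashSet replacing the tagging - every name here (Id, MasterId, slaves, EnumerateMasterList, CreateSlaveFor) is hypothetical:

        var slaveByKey = slaves.ToDictionary(s => s.MasterId);
        var seen = new HashSet<string>();
        foreach (var master in EnumerateMasterList())      // works even for GetFirst/GetNext-style APIs
        {
            seen.Add(master.Id);
            if (!slaveByKey.ContainsKey(master.Id))
                slaves.Add(CreateSlaveFor(master));        // expensive init only for genuinely new items
        }
        slaves.RemoveAll(s => !seen.Contains(s.MasterId)); // prune items gone from the master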

    Read the article

  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails. I'm trying to figure out how this is resolved in an actual implementation. Consider this example, where A is the receiver and B is the sender. A = abcde1234512345fghij B = abcde12345fghij As you can see, the only change is that 12345 has been removed. Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list. abcde|12345|fghij abcde -> 495 12345 -> 255 fghij -> 520 values = [495, 255, 520] Next we check to see if any hash values differ in A. If there's a matching block we can skip to the end of that block for the next check. If there's a non-matching block then we've found a difference. I'll step through this process. Hash the first block. Does this hash exist in the values list? abcde -> 495 (yes, so skip) Hash the second block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip) Hash the third block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip) Hash the fourth block. Does this hash exist in the values list? fghij -> 520 (yes, so skip) No more data, we're done. Since every hash was found in the values list, we conclude that A and B are the same. Which, in my humble opinion, isn't true. It seems to me this will happen whenever there is more than one block that shares the same hash. What am I missing?
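
    The gap in the walkthrough is the word "skip": the real sender never just answers found/not-found. It emits a positioned instruction stream from which the receiver reconstructs the sender's file, and each weak-checksum hit is confirmed with a strong per-block checksum before it is trusted. Roughly this shape (all types and helpers below are hypothetical, sketched in C#):

        foreach (var token in ScanWithRollingChecksum(fileB, blockHashesFromA))
        {
            if (token.IsBlockMatch)
                Emit(Instruction.CopyBlock(token.BlockIndex)); // "use A's block #k here"
            else
                Emit(Instruction.Literal(token.Bytes));        // raw bytes A doesn't have
        }
        // A rebuilds B by replaying the instructions over its own blocks. Its
        // second "12345" block is simply never referenced, so the extra bytes
        // vanish in the reconstruction - duplicate hashes cause no confusion.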

    Read the article

  • Opinion on "loop invariants", and are these frequently used in the industry?

    - by Michael Aaron Safyan
    I was thinking back to my freshman year at college (five years ago) when I took an exam to place-out of intro-level computer science. There was a question about loop invariants, and I was wondering if loop invariants are really necessary in this case or if the question was simply a bad example... the question was to write an iterative definition for a factorial function, and then to prove that the function was correct. The code that I provided for the factorial function was as follows: public static int factorial(int x) { if ( x < 0 ){ throw new IllegalArgumentException("Parameter must be >= 0"); }else if ( x == 0 ){ return 1; }else{ int result = 1; for ( int i = 1; i <= x; i++ ){ result*=i; } return result; } } My own proof of correctness was a proof by cases, and in each I asserted that it was correct by definition (x! is undefined for negative values, 0! is 1, and x! is 1*2*3...*x for a positive value of x). The professor wanted me to prove the loop using a loop invariant; however, my argument was that it was correct "by definition", because the definition of "x!" for a positive integer x is "the product of the integers from 1... x", and the for-loop in the else clause is simply a literal translation of this definition. Is a loop invariant really needed as a proof of correctness in this case? How complicated must a loop be before a loop invariant (and proper initialization and termination conditions) become necessary for a proof of correctness? Additionally, I was wondering... how often are such formal proofs used in the industry? I have found that about half of my courses are very theoretical and proof-heavy and about half are very implementation and coding-heavy, without any formal or theoretical material. How much do these overlap in practice? If you do use proofs in the industry, when do you apply them (always, only if it's complicated, rarely, never)?
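
    For reference, the invariant-based argument the professor was presumably after is short; a sketch (restated in C#, with the invariant as comments):

        static int Factorial(int x)
        {
            if (x < 0) throw new ArgumentOutOfRangeException("x", "Parameter must be >= 0");
            int result = 1;
            // Initialization: before the loop, i == 1 and result == 1 == 0!.
            for (int i = 1; i <= x; i++)
            {
                // Invariant at the top of each iteration: result == (i - 1)!
                result *= i;
                // After the body: result == i!
            }
            // Termination: the loop exits with i == x + 1, so result == x!
            // (this also covers x == 0, where the body never runs).
            return result;
        }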

    Read the article

  • Code to show UIPickerView under clicked UITextField

    - by Chris F
    I thought I'd share a code snippet where I show a UIPickerView when you click a UITextField. The code uses a UIPickerView, but there's no reason you couldn't use a different view controller, like a UITableViewController that uses a table instead of a picker. Just create a single-view project with a nib, and add a UITextField to the view and make your connections in IB. // .h file #import <UIKit/UIKit.h> @interface MyPickerViewViewController : UIViewController <UIPickerViewDelegate, UIPickerViewDataSource, UITextFieldDelegate> - (IBAction)dismissPickerView:(id)sender; @end // .m file #import "MyPickerViewViewController.h" @interface MyPickerViewViewController () { UIPickerView *_pv; NSArray *_array; IBOutlet __weak UITextField *_tf; BOOL _pickerViewShown; } @end @implementation MyPickerViewViewController - (void)viewDidLoad { [super viewDidLoad]; _pickerViewShown = NO; _array = [NSArray arrayWithObjects:@"One", @"Two", @"Three", @"Four", nil]; _pv = [[UIPickerView alloc] initWithFrame:CGRectZero]; _pv.showsSelectionIndicator = YES; _pv.dataSource = self; _pv.delegate = self; _tf.delegate = self; _tf.inputView = _pv; } - (IBAction)dismissPickerView:(id)sender { [_pv removeFromSuperview]; [_tf.inputView removeFromSuperview]; [_tf resignFirstResponder]; _pickerViewShown = NO; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } - (BOOL)textFieldShouldBeginEditing:(UITextField *)textField { if (!_pickerViewShown) { [self setRectForPickerViewRelativeToTextField:textField]; [self.view addSubview:_tf.inputView]; _pickerViewShown = YES; } else { [self dismissPickerView:self]; } return NO; } - (void)setRectForPickerViewRelativeToTextField:(UITextField*)textField { CGFloat xPos = textField.frame.origin.x; CGFloat yPos = textField.frame.origin.y; CGFloat width = textField.frame.size.width; CGFloat height = textField.frame.size.height; CGFloat pvHeight = _pv.frame.size.height; CGRect pvRect = CGRectMake(xPos, yPos+height, width, pvHeight); _pv.frame = pvRect; } - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component { return [_array objectAtIndex:row]; } - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView { return 1; } - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component { return _array.count; } - (void) pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component { _tf.text = [_array objectAtIndex:row]; [self dismissPickerView:self]; } @end

    Read the article

  • Executing a .NET Managed Assembly from SQL Server 2008 - Pros, Cons & Recommendations

    - by RPM1984
    Hi guys, looking for opinions/recommendations/links for the following scenario I'm currently facing. The Platform: .NET 4.0 Web Application SQL Server 2008 The Task: Overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" decides to post a forum topic. This results in (the end-solution) needing to look at various factors (about the post he did), then after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of these operations. The Options: Ok - so here's what I'm thinking. Although it would be much easier to do this in C# (LINQ-SQL) it doesn't make much sense as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (C# Class Library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically I'd expose a few methods that can be called by the T-SQL Stored Procedures. This to me has the following advantages: Simplicity of math. Do complex math in .NET vs complex math in T-SQL. No brainer. =) Abstraction of computations, configurable "lookup" values and business logic from raw T-SQL. T-SQL only needs to care about the data, simplifying the stored procedures and making it easier to maintain. When it needs to do math it delegates off to the managed assembly. So, having said that - I've never done this before (calling a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated. So - what am I asking? Well, firstly - I need some better references on how to actually do this. "This" being how to call a C# .NET 4 Assembly from within T-SQL Stored Procedures in SQL Server 2008. Secondly, who out there has done this, what problems (if any) did you face? Realize this may be difficult to provide a "correct answer", so I'll try to give it to whoever gives me the answer with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
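
    One sketch of the shape a SQL CLR method takes - the class name and formula below are invented, only the attribute and SqlTypes plumbing are the real surface. A major caveat worth flagging up front: SQL Server 2008 hosts the 2.0 CLR in-process, so the class library would need to target .NET 3.5 or earlier; a .NET 4.0 assembly will not load:

        using System;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public static class ForumMath
        {
            // DataAccess = None tells SQL Server the function touches no data,
            // which lets it schedule calls more freely.
            [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
            public static SqlDouble PostScore(SqlDouble ratio, SqlInt32 postLength)
            {
                if (ratio.IsNull || postLength.IsNull) return SqlDouble.Null;
                return Math.Log(postLength.Value + 1) * ratio.Value;
            }
        }

    The assembly is then registered with CREATE ASSEMBLY and each method exposed with CREATE FUNCTION ... EXTERNAL NAME, after which stored procedures call it like any T-SQL function.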

    Read the article

  • How to copy a DispatcherObject (BitmapSource) into a different thread?

    - by Tomáš Kafka
    Hi, I am trying to figure out how I can copy a DispatcherObject (in my case a BitmapSource) into another thread. Use case: I have a WPF app that needs to show a window in a new thread (the app is actually an Outlook addin and we need to do this because Outlook has some hooks in the main UI thread and is stealing certain hotkeys that we need to use - 'lost in translation' in interop of Outlook, WPF (which we use for UI), and WinForms (we need to use certain Microsoft-provided WinForms controls)). With that, I have my implementation of WPFMessageBox, which is configured by setting some static properties - and one of them is the BitmapSource for the icon. This is used so that at startup I can set WPFMessageBox.Icon once, and from then on, every WPFMessageBox will have the same icon. The problem is that the BitmapSource, which is assigned to the icon, is a DispatcherObject, and when read, it will throw InvalidOperationException: "The calling thread cannot access this object because a different thread owns it.". How can I clone that BitmapSource into the current thread? It has Clone() and CloneCurrentValue() methods, which don't work (they throw the same exception as well). It also occurred to me to use originalIcon.Dispatcher.Invoke( do the cloning here ) - but the BitmapSource's Dispatcher is null, and still - I'd create a copy on the wrong thread and still couldn't use it on mine. BitmapSource.IsFrozen == true. Any idea on how to copy the BitmapSource into a different thread (without completely reconstructing it from an image file in a new thread)? EDIT: So, freezing does not help: In the end I have a BitmapFrame (Window.Icon doesn't take any other kind of ImageSource anyway), and when I assign it as a Window.Icon on a different thread, even if frozen, I get InvalidOperationException: "The calling thread cannot access this object because a different thread owns it." with the following stack trace: WindowsBase.dll!System.Windows.Threading.Dispatcher.VerifyAccess() + 0x4a bytes WindowsBase.dll!System.Windows.Threading.DispatcherObject.VerifyAccess() + 0xc bytes PresentationCore.dll!System.Windows.Media.Imaging.BitmapDecoder.Frames.get() + 0xe bytes PresentationFramework.dll!MS.Internal.AppModel.IconHelper.GetIconHandlesFromBitmapFrame(object callingObj = {WPFControls.WPFMBox.WpfMessageBoxWindow: header}, System.Windows.Media.Imaging.BitmapFrame bf = {System.Windows.Media.Imaging.BitmapFrameDecode}, ref MS.Win32.NativeMethods.IconHandle largeIconHandle = {MS.Win32.NativeMethods.IconHandle}, ref MS.Win32.NativeMethods.IconHandle smallIconHandle = {MS.Win32.NativeMethods.IconHandle}) + 0x3b bytes > PresentationFramework.dll!System.Windows.Window.UpdateIcon() + 0x118 bytes PresentationFramework.dll!System.Windows.Window.SetupInitialState(double requestedTop = NaN, double requestedLeft = NaN, double requestedWidth = 560.0, double requestedHeight = NaN) + 0x8a bytes PresentationFramework.dll!System.Windows.Window.CreateSourceWindowImpl() + 0x19b bytes PresentationFramework.dll!System.Windows.Window.SafeCreateWindow() + 0x29 bytes PresentationFramework.dll!System.Windows.Window.ShowHelper(object booleanBox) + 0x81 bytes PresentationFramework.dll!System.Windows.Window.Show() + 0x48 bytes PresentationFramework.dll!System.Windows.Window.ShowDialog() + 0x29f bytes WPFControls.dll!WPFControls.WPFMBox.WpfMessageBox.ShowDialog(System.Windows.Window owner = {WPFControlsTest.MainWindow}) Line 185 + 0x10 bytes C#
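
    A workaround sketch (untested against the Window.UpdateIcon path in the stack trace): share encoded pixels between the threads instead of the DispatcherObject itself, and rebuild a fresh, frozen frame on the consuming thread:

        using System.IO;
        using System.Windows.Media.Imaging;

        static byte[] EncodeIcon(BitmapSource source)   // call on the origin thread
        {
            var encoder = new PngBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create(source));
            using (var ms = new MemoryStream())
            {
                encoder.Save(ms);
                return ms.ToArray(); // plain bytes, safe to hand to any thread
            }
        }

        static BitmapFrame DecodeIcon(byte[] pngBytes)  // call on the window's thread
        {
            var frame = BitmapFrame.Create(new MemoryStream(pngBytes),
                BitmapCreateOptions.None, BitmapCacheOption.OnLoad); // decode now, drop the stream
            frame.Freeze();
            return frame;
        }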

    Read the article

  • EXC_BAD_ACCESS when I change moviePlayer contentURL

    - by Bruno
    Hello, In a few words, my application is doing this: 1) My main view (MovieListController) has some video thumbnails, and when I tap on one, it displays the moviePlayer (MoviePlayerViewController): MovieListController.h : @interface MoviePlayerViewController : UIViewController <UITableViewDelegate>{ UIView *viewForMovie; MPMoviePlayerController *player; } @property (nonatomic, retain) IBOutlet UIView *viewForMovie; @property (nonatomic, retain) MPMoviePlayerController *player; - (NSURL *)movieURL; @end MovieListController.m : MoviePlayerViewController *controllerTV = [[MoviePlayerViewController alloc] initWithNibName:@"MoviePlayerViewController" bundle:nil]; controllerTV.delegate = self; controllerTV.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal; [self presentModalViewController: controllerTV animated: YES]; [controllerTV release]; 2) In my moviePlayer, I initialize the video I want to play: MoviePlayerViewController.m : @implementation MoviePlayerViewController @synthesize player; @synthesize viewForMovie; - (void)viewDidLoad { NSLog(@"start"); [super viewDidLoad]; self.player = [[MPMoviePlayerController alloc] init]; self.player.view.frame = self.viewForMovie.bounds; self.player.view.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight; [self.viewForMovie addSubview:player.view]; self.player.contentURL = [self movieURL]; } - (void)dealloc { NSLog(@"dealloc TV"); [player release]; [viewForMovie release]; [super dealloc]; } -(NSURL *)movieURL { NSBundle *bundle = [NSBundle mainBundle]; NSString *moviePath = [bundle pathForResource:@"FR_Tribord_Surf camp_100204" ofType:@"mp4"]; if (moviePath) { return [NSURL fileURLWithPath:moviePath]; } else { return nil; } } It's working well - my movie is displayed. My problem: When I go back to my main view: - (void) returnToMap: (MoviePlayerViewController *) controller { [self dismissModalViewControllerAnimated: YES]; } and I tap on a thumbnail to display the moviePlayer (MoviePlayerViewController) again, I get a *Program received signal: "EXC_BAD_ACCESS".* In my debugger I saw that it's stopping on the thread "main": // // main.m // MoviePlayer // // Created by Eric Freeman on 3/27/10. // Copyright Apple Inc 2010. All rights reserved. // #import <UIKit/UIKit.h> int main(int argc, char *argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int retVal = UIApplicationMain(argc, argv, nil, nil); //EXC_BAD_ACCESS [pool release]; return retVal; } If I comment out self.player.contentURL = [self movieURL]; it works, but when I leave it in, I have this problem. I read that it's due to a null pointer or a memory problem, but I don't understand why it works the first time and not the second time. I release my objects in the dealloc method. Thanks for your help! Bruno.

    Read the article

  • Is it ok to dynamic cast "this" as a return value?

    - by Panayiotis Karabassis
    This is more of a design question. I have a template class, and I want to add extra methods to it depending on the template type. To practice the DRY principle, I have come up with this pattern (definitions intentionally omitted): template <class T> class BaseVector: public boost::array<T, 3> { protected: BaseVector<T>(const T x, const T y, const T z); public: bool operator == (const Vector<T> &other) const; Vector<T> operator + (const Vector<T> &other) const; Vector<T> operator - (const Vector<T> &other) const; Vector<T> &operator += (const Vector<T> &other) { (*this)[0] += other[0]; (*this)[1] += other[1]; (*this)[2] += other[2]; return *dynamic_cast<Vector<T> * const>(this); } } template <class T> class Vector : public BaseVector<T> { public: Vector<T>(const T x, const T y, const T z) : BaseVector<T>(x, y, z) { } }; template <> class Vector<double> : public BaseVector<double> { public: Vector<double>(const double x, const double y, const double z); Vector<double>(const Vector<int> &other); double norm() const; }; I intend BaseVector to be nothing more than an implementation detail. This works, but I am concerned about operator+=. My question is: is the dynamic cast of the this pointer a code smell? Is there a better way to achieve what I am trying to do (avoid code duplication, and unnecessary casts in the user code)? Or am I safe, since the BaseVector constructor is protected?

    Read the article
