Search Results

Search found 42608 results on 1705 pages for 'sharepoint object model'.


  • WCF - Return object without serializing?

    - by Mayo
    One of my WCF functions returns an object that has a member variable of a type from another library that is beyond my control. I cannot decorate that library's classes. In fact, I cannot even use a DataContractSurrogate, because the library's classes have private member variables that are essential to operation (i.e. if I return the object without those private member variables, the public properties throw exceptions).

    If I say that interoperability for this particular method is not needed (at least until the owners of this library can revise to make their objects serializable), is it possible for me to use WCF to return this object such that it can at least be consumed by a .NET client? How do I go about doing that?

    Update: I am adding pseudo code below.

        // My code, I have control
        [DataContract]
        public class MyObject
        {
            private TheirObject theirObject;

            [DataMember]
            public int SomeNumber
            {
                get { return theirObject.SomeNumber; } // public property exposed
                private set { }
            }
        }

        // Their code, I have no control
        public class TheirObject
        {
            private TheirOtherObject theirOtherObject;

            public int SomeNumber
            {
                get { return theirOtherObject.SomeOtherProperty; }
                set
                {
                    // ...
                }
            }
        }

    I've tried adding DataMember to my instance of their object, making it public, using a DataContractSurrogate, and even manually streaming the object. In all cases, I get some error that eventually leads back to their object not being explicitly serializable.
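
    One pattern that often comes up for this kind of constraint is to stop serializing anything that wraps the third-party type and return a plain data-transfer contract instead. A rough sketch is below; the MyObjectDto and mapper names are made up and not part of the question:

        // Rough sketch (the DTO type and mapper are invented names): copy the values you
        // need out of the third-party object into a plain serializable contract, so WCF
        // never has to serialize TheirObject itself.
        using System.Runtime.Serialization;

        [DataContract]
        public class MyObjectDto
        {
            [DataMember]
            public int SomeNumber { get; set; }
        }

        public static class MyObjectDtoMapper
        {
            // Called inside the service method, where TheirObject (from the question above)
            // is still fully usable.
            public static MyObjectDto FromTheirObject(TheirObject source)
            {
                return new MyObjectDto { SomeNumber = source.SomeNumber };
            }
        }

    The client then consumes only the copied values, which sidesteps the serializer entirely, but it also means the client never sees the live third-party object.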

    Read the article

  • SharePoint 2007 - Content deployment and swapping content database

    - by Mel Lota
    Hi all, I'm currently working on a SharePoint 2007 site which is set up to allow clients to author content on a staging server, which is then automatically pushed up to the live environment via content deployment. The content deployment is set up in 'Content deployment jobs and paths' in central admin.

    Now the problem I've got is that it seems that historically there has been a mixture of full and incremental deployments done to the live site collection, which according to Stefan Goßner's best practices post (http://blogs.technet.com/stefan_gossner/pages/content-deployment-best-practices.aspx) is a bad idea, because things soon become out of sync. It's gotten to the point where the content deployment has just stopped working, and incremental or full deployments are throwing errors in the logs.

    What I'm thinking is that I probably need to perform a full content deployment to an empty site collection and then somehow switch the new clean site collection with the current live one. I was wondering if anybody has any experience with this and could provide any pointers. I'm currently investigating the feasibility of performing the clean content deployment and then switching the live content database with the new one; however, in my tests I've found that as soon as I switch content databases, the incremental deployment still fails.

    Any help much appreciated. (Note: I did post this on SharePoint Overflow as well, but thought I'd put it on here in case anybody else has any ideas.) Cheers

    Read the article

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently, we have 11 pages of document links, each with its own subheadings.

    So far I have it set up so that each document has a custom field called "Page" with a check box list of all the document pages on the intranet site. On each individual page, I have set up a Content Query Web Part that displays the documents that have the corresponding Page value set (i.e. if a document's Page value has been checked for "HR" it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for the maintenance of the documents to upload new documents to the documents list and note which pages they should appear on, without having to manually update the pages themselves.

    The problem that I am having is that I cannot seem to find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual check boxes for each page/subheading combination, but this would create a list of approximately 50-75 items. Does anyone have any ideas as to how I could accomplish this, either via CQWP or by different means?

    Goals/Requirements of Installation:

      - Allow Intranet documents to be maintained by nontechnical personnel
      - Display documents on the appropriate pages without the user having to edit the actual page or web part
      - Denote document page location using user-settable document attributes (if possible)
      - Maintain current intranet organization and workflow
      - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

    Read the article

  • WSS Search fills 10 GB limit on SBS server 2011

    - by Kactus
    I've got an SBS Server 2011 Standard SP1 that isn't very busy: 2 local users and 2 remote. We have SharePoint with maybe a dozen small documents at most. I've just started getting the following two errors:

        Could not allocate space for object 'dbo.MSSBatchHistory'.'IX_MSSBatchHistory' in database 'WSS_Search_SERVER' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    and

        CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 10240 MB per database.

    Digging around in SQL manager, I see that the WSS Search DB file size is 10241 MB; the log file is only 147 MB.

    Firstly, why is WSS Search taking up so much space? How can I stop it from doing so, and what can I do now to get things running OK? I know about log file truncating, and that isn't the case here since the log is tiny. Any help is appreciated. There is plenty of free space on the disk (791 GB free).

    Thanks
    Kactus

    Read the article

  • C# Select clause returns system exception instead of relevant object

    - by Kashif
    I am trying to use the select clause to pick out an object which matches a specified name field from a database query as follows:

        objectQuery = from obj in objectList
                      where obj.Equals(objectName)
                      select obj;

    In the results view of my query, I get:

        base {System.SystemException} = {"Boolean Equals(System.Object)"}

    where I should be expecting something like a Car, Make, or Model. Would someone please explain what I am doing wrong here? The method in question can be seen here:

        // This function searches the database's table for a single object
        // that matches the 'Name' property with 'objectName'.
        public static T Read<T>(string objectName) where T : IEquatable<T>
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                // pull (query) all the objects from the table in the database
                IQueryable<T> objectList = session.Query<T>();

                // return the number of objects in the table
                // alternative: int count = makeList.Count<T>();
                int count = objectList.Count();

                // create a reference for our queryable list of objects
                IQueryable<T> objectQuery = null;

                // create an object reference for our found object
                T foundObject = default(T);

                if (count > 0)
                {
                    // give me all objects that have a name that matches 'objectName'
                    // and store them in 'objectQuery'
                    objectQuery = from obj in objectList
                                  where obj.Equals(objectName)
                                  select obj;

                    // make sure that 'objectQuery' has only one object in it
                    try
                    {
                        foundObject = (T)objectQuery.Single();
                    }
                    catch
                    {
                        return default(T);
                    }

                    // output some information to the console (output screen)
                    Console.WriteLine("Read Make: " + foundObject.ToString());
                }

                // pass the reference of the found object on to whoever asked for it
                return foundObject;
            }
        }

    Note that I am using the interface IEquatable<T> in my method descriptor. An example of the classes I am trying to pull from the database is:

        public class Make : IEquatable<Make>
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Model> Models { get; set; }

            public Make()
            {
                // this public no-argument constructor is required for NHibernate
            }

            public Make(string makeName)
            {
                this.Name = makeName;
            }

            public override string ToString()
            {
                return Name;
            }

            // Implementation of IEquatable<T> interface
            public virtual bool Equals(Make make)
            {
                if (this.Id == make.Id)
                {
                    return true;
                }
                else
                {
                    return false;
                }
            }

            // Implementation of IEquatable<T> interface
            public virtual bool Equals(String name)
            {
                if (this.Name.Equals(name))
                {
                    return true;
                }
                else
                {
                    return false;
                }
            }
        }

    And the interface is described simply as:

        public interface IEquatable<T>
        {
            bool Equals(T obj);
        }
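
    For context on the error text: with the custom IEquatable<T> constraint, obj.Equals(objectName) called with a string argument resolves to Object.Equals(object), which a LINQ provider such as NHibernate's typically cannot translate into SQL, hence the "Boolean Equals(System.Object)" in the results view. A sketch of the usual workaround, querying on a plain string property instead (the INamed interface here is hypothetical, not from the original post):

        // Hypothetical sketch: query on a plain string property instead of Equals(),
        // so the LINQ provider can translate the comparison into SQL.
        using System.Linq;

        public interface INamed
        {
            string Name { get; }
        }

        public static class Repository
        {
            public static T ReadByName<T>(IQueryable<T> objectList, string objectName)
                where T : class, INamed
            {
                // 'o.Name == objectName' can be translated; 'o.Equals(objectName)' binds to
                // Object.Equals(object), which is what the "Boolean Equals(System.Object)"
                // message is complaining about.
                return objectList.SingleOrDefault(o => o.Name == objectName);
            }
        }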

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

      - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
      - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
      - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy:
          - It's really hard to get right.
          - It's really easy to get it wrong and never know it because things build anyway.
          - Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting.
          - You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.

    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

      - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
      - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
      - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years.

    We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations:

      - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
      - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
      - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?
      - It ought to be very fast to build stub objects, as there are no input objects to process.
      - Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
      - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.
      - We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.

    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

      - Present the same set of global symbols, with the same ELF versioning, as the real object.
      - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
      - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
      - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
      - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

      - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
      - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
      - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool.

    Taking all of these speed bumps into account, I made a new plan:

      - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
      - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.

    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

      - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
      - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
      - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
      - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language
        that will be applicable to ON, and which will require all sharable
        object mapfiles to use the new syntax.

    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

    ...and so, I backed away, put it down for a few months and did other work...

    ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

      - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
      - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
      - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
      - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

      - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
      - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
      - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
      - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
      - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday.

    This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

    My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.

    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

      - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
      - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical.
      - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Using mocks to set up object even if you will not be mocking any behavior or verifying any interaction with it?

    - by smp7d
    When building a unit test, is it appropriate to use a mocking tool to assist you in setting up an object even if you will not be mocking any behavior or verifying any interaction with that object? Here is a simple example in pseudo-code:

        //an object we actually want to mock
        Object someMockedObject = Mock(Object.class);
        EqualityChecker checker = new EqualityChecker(someMockedObject);

        //an object we are mocking only to avoid figuring out how to instantiate or
        //tying ourselves to some constructor that may be removed in the future
        ComplicatedObject someObjectThatIsHardToInstantiate = Mock(ComplicatedObject.class);

        //set the expectation on the mock
        When(someMockedObject).equals(someObjectThatIsHardToInstantiate).return(false);

        Assert(equalityChecker.check(someObjectThatIsHardToInstantiate)).isFalse();

        //verify that the mock was interacted with properly
        Verify(someMockedObject).equals(someObjectThatIsHardToInstantiate).oneTime();

    Is it appropriate to mock ComplicatedObject in this scenario?

    Read the article

  • BindAttribute, Exclude nested properties for complex types

    - by David Board
    I have a 'Stream' model:

        public class Stream
        {
            public int ID { get; set; }

            [Required]
            [StringLength(50, ErrorMessage = "Stream name cannot be longer than 50 characters.")]
            public string Name { get; set; }

            [Required]
            [DataType(DataType.Url)]
            public string URL { get; set; }

            [Required]
            [Display(Name = "Service")]
            public int ServiceID { get; set; }

            public virtual Service Service { get; set; }
            public virtual ICollection<Event> Events { get; set; }
            public virtual ICollection<Monitor> Monitors { get; set; }
            public virtual ICollection<AlertRule> AlertRules { get; set; }
        }

    For the 'create' view for this model, I have made a view model to pass some additional information to the view:

        public class StreamCreateVM
        {
            public Stream Stream { get; set; }
            public SelectList ServicesList { get; set; }
            public int SelectedService { get; set; }
        }

    Here is my create post action:

        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create([Bind(Include = "Stream, Stream.Name, Stream.ServiceID, SelectedService")] StreamCreateVM viewModel)
        {
            if (ModelState.IsValid)
            {
                db.Streams.Add(viewModel.Stream);
                db.SaveChanges();
                return RedirectToAction("Index", "Service", new { id = viewModel.Stream.ServiceID });
            }

            return View(viewModel);
        }

    Now, this all works, apart from the [Bind(Include = "Stream, Stream.Name, Stream.ServiceID, SelectedService")] bit. I can't seem to Include or Exclude properties within a nested object.
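
    One workaround that is sometimes used (a sketch only, not from the post): since Include appears to match only the bound type's direct property names, the nested Stream can be bound as its own action parameter with a Prefix, so Include then applies to Name and ServiceID directly.

        // Hypothetical sketch: bind the nested Stream separately, using its form-field prefix.
        // 'db' and StreamCreateVM are the same objects used in the question.
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Create(
            [Bind(Prefix = "Stream", Include = "Name,ServiceID")] Stream stream,
            int selectedService)
        {
            if (ModelState.IsValid)
            {
                db.Streams.Add(stream);
                db.SaveChanges();
                return RedirectToAction("Index", "Service", new { id = stream.ServiceID });
            }

            return View(new StreamCreateVM { Stream = stream, SelectedService = selectedService });
        }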

    Read the article

  • Where should 'CreateMap' statements go?

    - by jonathanconway
    I frequently use AutoMapper to map Model (Domain) objects to ViewModel objects, which are then consumed by my Views, in a Model/View/View-Model pattern. This involves many 'Mapper.CreateMap' statements, which all must be executed, but must only be executed once in the lifecycle of the application.

    Technically, then, I should keep them all in a static method somewhere, which gets called from my Application_Start() method (this is an ASP.NET MVC application). However, it seems wrong to group a lot of different mapping concerns together in one central location, especially when mapping code gets complex and involves formatting and other logic.

    Is there a better way to organize the mapping code so that it's kept close to the ViewModel that it concerns?

    (I came up with one idea - having a 'CreateMappings' method on each ViewModel, and in the BaseViewModel, calling this method on instantiation. However, since the method should only be called once in the application lifecycle, it needs some additional logic to cache a list of ViewModel types for which the CreateMappings method has been called, and then only call it when necessary, for ViewModels that aren't in that list.)
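
    For what it's worth, AutoMapper has a Profile base class intended for exactly this kind of grouping: one profile per ViewModel (or per feature area), all registered once at startup. A rough sketch follows; the class names are made up, and the exact registration API differs between AutoMapper versions:

        // Hypothetical sketch: one Profile per ViewModel keeps mappings next to the concern.
        // Older AutoMapper versions override Configure(); newer ones put the CreateMap calls
        // in the Profile constructor instead.
        public class ProductViewModelProfile : Profile
        {
            protected override void Configure()
            {
                CreateMap<Product, ProductViewModel>();
                // formatting and other mapping logic for this ViewModel goes here
            }
        }

        // Called once from Application_Start():
        public static class AutoMapperConfig
        {
            public static void RegisterMappings()
            {
                Mapper.Initialize(cfg =>
                {
                    cfg.AddProfile<ProductViewModelProfile>();
                    // add further profiles here
                });
            }
        }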

    Read the article

  • Why does my data not pass into my view correctly?

    - by dmanexe
    I have a model, view and controller not interacting correctly, and I do not know where the error lies.

    First, the controller. According to the Code Igniter documentation, I'm passing variables correctly here.

        function view()
        {
            $html_head = array(
                'title' => 'Estimate Management'
            );

            $estimates = $this->Estimatemodel->get_estimates();

            $this->load->view('html_head', $html_head);
            $this->load->view('estimates/view', $estimates);
            $this->load->view('html_foot');
        }

    The model (short and sweet):

        function get_estimates()
        {
            $query = $this->db->get('estimates')->result();
            return $query;
        }

    And finally the view, just to print the data for initial development purposes:

        <? print_r($estimates); ?>

    Now it's undefined when I navigate to this page. However, I know that $query is defined, because it works when I run the model code directly in the view.

    Read the article

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to a more general SQL case.

    I want to maintain an ordered table using Core Data, with the possibility for the user to:

      - reorder rows
      - insert new lines anywhere
      - delete any existing line

    What's the best data model to do that? I can see two ways:

      1) Model it as an array: I add an int position property to my entity.
      2) Model it as a linked list: I add two one-to-one relations, next and previous, from my entity to itself.

    Option 1) makes it easy to sort, but painful to insert or delete, as you then have to update the position of all objects that come after. Option 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case.

    Now I can imagine a variation on 1):

      3) Add an int ordering property to the entity, but instead of having it count one-by-one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats.

    Am I on the right track with solution 3), or is there something smarter?
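
    To make option 3) concrete, here is a small language-agnostic sketch, written in C# purely for illustration (the type and member names are invented): new rows take the midpoint of their neighbours' ordering values, and a full renumbering happens only when the gap is exhausted.

        // Hypothetical sketch of gap-based ordering (option 3).
        using System;
        using System.Collections.Generic;

        public class OrderedItem
        {
            public double Ordering { get; set; }
            public string Payload { get; set; }
        }

        public static class OrderingHelper
        {
            // Compute the ordering value for an item inserted between 'previous' and 'next';
            // either neighbour may be null at the ends of the list.
            public static double NewOrderingValue(OrderedItem previous, OrderedItem next)
            {
                if (previous == null && next == null) return 100.0;
                if (previous == null) return next.Ordering - 100.0;
                if (next == null) return previous.Ordering + 100.0;

                double midpoint = (previous.Ordering + next.Ordering) / 2.0;

                // If the gap has collapsed (no representable midpoint), signal a renumber.
                if (midpoint <= previous.Ordering || midpoint >= next.Ordering)
                    throw new InvalidOperationException("Gap exhausted - renumber the list.");

                return midpoint;
            }

            // Re-space all items 100 apart, preserving their current order.
            public static void Renumber(IList<OrderedItem> itemsInOrder)
            {
                for (int i = 0; i < itemsInOrder.Count; i++)
                    itemsInOrder[i].Ordering = (i + 1) * 100.0;
            }
        }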

    Read the article

  • How to nest joins with CakePHP?

    - by Daren Thomas
    I'm trying to behave. So, instead of using the following SQL syntax:

        select *
        from tableA
        INNER JOIN tableB on tableA.id = tableB.tableA_id
        LEFT OUTER JOIN (
            tableC
            INNER JOIN tableD on tableC.tableD_id = tableD.id
        ) on tableC.tableA_id = tableA.id

    I'd like to use the CakePHP model->find(). This will let me use the Paginator too, since that will not work with custom SQL queries as far as I understand (unless you hardcode one single pagination query into the model, which seems a little inflexible to me). What I've tried so far:

        /* inside tableA_controller.php, inside an action, e.g. "view" */
        $this->paginate['recursive'] = -1; # suppress model associations for now
        $this->paginate['joins'] = array(
            array(
                'table' => 'tableB',
                'alias' => 'TableB',
                'type' => 'inner',
                'conditions' => 'TableB.tableA_id = TableA.id',
            ),
            array(
                'table' => 'tableC',
                'alias' => 'TableC',
                'type' => 'left',
                'conditions' => 'TableC.tableA_id = TableA.id',
                'joins' => array( # this would be the obvious way to do it, but doesn't work
                    array(
                        'table' => 'tableD',
                        'alias' => 'TableD',
                        'type' => 'inner',
                        'conditions' => 'TableC.tableD_id = TableD.id'
                    )
                )
            )
        );

    That is, nesting the joins into the structure. But that doesn't work: CakePHP just ignores the nested 'joins' element, which was kind of what I expected, but sad. I have seen hints in comments on how to do subqueries (in the where clause) using a statement builder. Can a similar trick be used here?

    Read the article

  • MVC 2 - Name Attributes on HTML Input Field when using Parent/Child Entities

    - by Click Ahead
    Hi All, I'm pretty new to MVC 2 using the Entity Framework. I have two tables, Company {ID int identity PK, Name nvarchar} and User {ID int identity PK, UserName nvarchar, CompanyID int FK}. A foreign key exists between User and Company. I generated my ADO.NET Entity Data Model, a controller and a view to insert a record. My HTML form has the fields Company and UserName, and the idea is that when I click save, a Company and a User are inserted into the database. Sounds straightforward, right!

    My question is as follows: I created a strongly-typed view derived from my 'User' entity. I'm using the HTML helper Html.TextBoxFor(model => model.Organisation.Name), but the HTML name attribute for this input field is 'Organisation.Name'. My problem with this is that the dot throws up all sorts of issues in jQuery, which sees it as a property. If I want to change the name, I read that I can use DataAnnotations, but because I used the Entity Designer this involves using buddy classes. That seems like a bit of overkill just to change the HTML name attribute on this input field.

    Am I approaching this the right way, or am I missing something here? Thanks for the help!

    Read the article

  • How do I get started designing and implementing a script interface for my .NET application?

    - by Peter Mortensen
    How do I get started designing and implementing a script interface for my .NET application?

    There is VSTA (the .NET equivalent of VBA for COM), but as far as I understand I would have to pay a license fee for every installation of my application. It is an open source application, so this will not work. There is also e.g. the embedding of interpreters (IronPython?), but I don't understand how this would allow exposing an "object model" (see below) to external (or internal) scripts.

    What is the scripting interface story in .NET? Is it somehow trivial in .NET to do this?

    Background: I have once designed and implemented a fairly involved script interface for a Macintosh application for acquisition and analysis of data from a mass spectrometer (Mac OS, System 7), and later a COM interface for a Windows application. Both were designed with an "object model" and classes (that can have properties). These are overloaded words, but in a scripting interface context an object model is essentially a containment hierarchy of objects of specific classes. Classes have properties and lists of contained objects. E.g. like in the COM interfaces exposed in Microsoft Office applications, where the application object can be used to add to its list of documents (with the side effect of creating the GUI representation of a document). External scripts can create new objects in a container and navigate through the content of the hierarchy at any given time. In the Macintosh case, scripts could be written in e.g. AppleScript or Frontier. On the Macintosh, the implementation of a scripting interface was very complicated. Support for it in Metrowerks' C++ class library (the name escapes me right now) made it much simpler.
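
    On the embedded-interpreter route, the usual way an object model gets exposed is by pushing a root application object into the engine's scope, after which scripts can walk the containment hierarchy themselves. A minimal sketch using IronPython's hosting API is below; the Application and Document classes are hypothetical stand-ins for a real object model.

        // Hypothetical sketch: expose a root "app" object to IronPython scripts.
        using System.Collections.Generic;
        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        public class Document
        {
            public string Title { get; set; }
        }

        public class Application
        {
            private readonly List<Document> documents = new List<Document>();

            public IList<Document> Documents
            {
                get { return documents; }
            }

            public Document AddDocument(string title)
            {
                Document doc = new Document { Title = title };
                documents.Add(doc);
                return doc;
            }
        }

        public static class ScriptHost
        {
            public static void Run(Application app, string scriptSource)
            {
                ScriptEngine engine = Python.CreateEngine();
                ScriptScope scope = engine.CreateScope();

                // Scripts see the containment hierarchy through this root object,
                // e.g.  app.AddDocument("report")  or  app.Documents.Count
                scope.SetVariable("app", app);

                engine.Execute(scriptSource, scope);
            }
        }

    The same pattern works for an internal scripting console as well as for external script files; the engine only ever sees the public surface of the objects placed in its scope.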

    Read the article

  • Object validator - is this good design?

    - by neo2862
    I'm working on a project where the API methods I write have to return different "views" of domain objects, like this:

        namespace View.Product
        {
            public class SearchResult : View
            {
                public string Name { get; set; }
                public decimal Price { get; set; }
            }

            public class Profile : View
            {
                public string Name { get; set; }
                public decimal Price { get; set; }

                [UseValidationRuleset("FreeText")]
                public string Description { get; set; }

                [SuppressValidation]
                public string Comment { get; set; }
            }
        }

    These are also the arguments of setter methods in the API which have to be validated before storing them in the DB. I wrote an object validator that lets the user define validation rulesets in an XML file and checks if an object conforms to those rules:

        [Validatable]
        public class View
        {
            [SuppressValidation]
            public ValidationError[] ValidationErrors
            {
                get { return Validator.Validate(this); }
            }
        }

        public static class Validator
        {
            private static Dictionary<string, Ruleset> Rulesets;

            static Validator()
            {
                // read rulesets from xml
            }

            public static ValidationError[] Validate(object obj)
            {
                // check if obj is decorated with ValidatableAttribute
                // if not, return an empty array (successful validation)

                // iterate over the properties of obj
                // - if the property is decorated with SuppressValidationAttribute,
                //   continue
                // - if it is decorated with UseValidationRulesetAttribute,
                //   use the ruleset specified to call
                //   Validate(object value, string rulesetName, string fieldName)
                // - otherwise, get the name of the property using reflection and
                //   use that as the ruleset name
            }

            private static List<ValidationError> Validate(object obj, string fieldName, string rulesetName)
            {
                // check if the ruleset exists, if not, throw exception
                // call the ruleset's Validate method and return the results
            }
        }

        public class Ruleset
        {
            public Type Type { get; set; }
            public Rule[] Rules { get; set; }

            public List<ValidationError> Validate(object property, string propertyName)
            {
                // check if property is of type Type
                // if not, throw exception

                // iterate over the Rules and call their Validate methods
                // return a list of their return values
            }
        }

        public abstract class Rule
        {
            public Type Type { get; protected set; }

            public abstract ValidationError Validate(object value, string propertyName);
        }

        public class StringRegexRule : Rule
        {
            public string Regex { get; set; }

            public StringRegexRule()
            {
                Type = typeof(string);
            }

            public override ValidationError Validate(object value, string propertyName)
            {
                // see if Regex matches value and return
                // null or a ValidationError
            }
        }

    Phew... Thanks for reading all of this. I've already implemented it and it works nicely, and I'm planning to extend it to validate the contents of IEnumerable fields and other fields that are Validatable.

    What I'm particularly concerned about is that if no ruleset is specified, the validator tries to use the name of the property as the ruleset name. (If you don't want that behavior, you can use [SuppressValidation].) This makes the code much less cluttered (no need to use [UseValidationRuleset("something")] on every single property) but it somehow doesn't feel right. I can't decide if it's awful or awesome. What do you think?

    Any suggestions on the other parts of this design are welcome too. I'm not very experienced and I'm grateful for any help. Also, is "Validatable" a good name? To me, it sounds pretty weird but I'm not a native English speaker.

    Read the article

  • How to validate properties across Models without repeating the validation logic

    - by Mano
    Hello All, I am building an ASP.NET MVC app. I have a data model, say User:

        public class User
        {
            public int UserId { get; private set; }
            public string FirstName { get; set; }
        }

    The validation to be done is that the first name cannot exceed 50 characters. I have another presentation model in which I have the property FirstName too. I do not want to repeat the validation logic in both models; I want to have it in one place and that should be it.

    I can do it in a simpler way by adding a function which can be called while setting the property, like:

        private string firstName;

        public string FirstName
        {
            get { return firstName; }
            set
            {
                // assuming ValidName exists and it will throw an exception if the value is not valid
                if (PropertyValidator.ValidName(value))
                {
                    firstName = value;
                }
            }
        }

    But I am looking for something much simpler, so that I do not need to add this for every property I need to have validated. I looked at ValidationAttribute, but then I can only validate from a controller (ModelState.IsValid). Since this model could be used by some other type of app, like a console app, I could not choose that. But if there is a way to use MVC's ModelState.IsValid from outside of a controller, that would be awesome.

    Any suggestions are greatly appreciated. Thanks!!
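
    For what it's worth, DataAnnotations attributes can be evaluated outside of MVC and ModelState: System.ComponentModel.DataAnnotations.Validator works from a console app as well. A rough sketch, reusing the User example with a hypothetical [StringLength] rule as the single place the limit is declared:

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class User
        {
            public int UserId { get; private set; }

            [StringLength(50)]   // the single place the 50-character rule is declared
            public string FirstName { get; set; }
        }

        public static class ModelValidation
        {
            // Returns the broken rules; works in web, console, or any other host.
            public static IList<ValidationResult> Validate(object model)
            {
                var results = new List<ValidationResult>();
                var context = new ValidationContext(model, serviceProvider: null, items: null);
                Validator.TryValidateObject(model, context, results, validateAllProperties: true);
                return results;
            }
        }

    MVC's default binder reads the same attributes into ModelState, so the web app and the console app can share one set of rules without sharing any controller code.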

    Read the article

  • ASP.NET MVC3 ValueProvider drops string input to a double property

    - by Daniel Koverman
    I'm attempting to validate the input of a text box which corresponds to a property of type double in my model. If the user inputs "foo", I want to know about it so I can display an error. However, the ValueProvider is dropping the value silently (no errors are added to the ModelState).

    In a normal submission, I fill in "2" for the text box corresponding to myDouble and submit the form. Inspecting controllerContext.HttpContext.Request.Form shows that myDouble=2, among other correct inputs. bindingContext.ValueProvider.GetValue("myDouble") == 2, as expected. bindingContext.ModelState.Count == 6 and bindingContext.ModelState["myDouble"].Errors.Count == 0. Everything is good and the model binds as expected.

    Then I fill in "foo" for the text box corresponding to myDouble and submit the form. Inspecting controllerContext.HttpContext.Request.Form shows that myDouble=foo, which is what I expected. However, bindingContext.ValueProvider.GetValue("myDouble") == null and bindingContext.ModelState.Count == 5 (the exact number isn't important, but it's one less than before). Looking at the ValueProvider, it is as if myDouble was never submitted, and the model binding occurs as if it wasn't. This makes it difficult to differentiate between a bad input and no input.

    Is this the expected behavior of ValueProvider? Is there a way to get ValueProvider to report when conversion fails without implementing a custom ValueProvider? Thanks!

    Read the article

  • codeigniter and JSON

    - by ole
    Hello all, I am having a problem where I only get one value from my database, the title, and I want to show the content and username from the same table too.

    Here is my JSON code:

        <script type="text/javascript">
            $.getJSON('ajax/forumThreads', function(data) {
                alert(data[0].overskrift);
                alert(data[0].indhold);
            });
        </script>

    My controller:

        <?php
        class ajax extends Controller {

            function forumThreads()
            {
                $this->load->model('ajax_model');
                $data['forum_list'] = $this->ajax_model->forumList();

                if ($data['forum_list'] !== false)
                {
                    echo json_encode($data['forum_list']);
                }
            }
        }

    My model file:

        <?php
        class ajax_model extends Model {

            function forumList()
            {
                $this->db->select('overskrift', 'indhold', 'brugernavn', 'dato');
                $this->db->order_by('id', 'desc');
                $this->db->limit(5);

                $forum_list = $this->db->get('forum_traad');

                if ($forum_list->num_rows() > 0)
                {
                    return $forum_list->result_array();
                }
                else
                {
                    return false;
                }
            }
        }

    Read the article

  • How Can I Bypass the X-Frame-Options: SAMEORIGIN HTTP Header?

    - by Daniel Coffman
    I am developing a web page that needs to display, in an iframe, a report served by another company's SharePoint server. They are fine with this. The page we're trying to render in the iframe is giving us X-Frame-Options: SAMEORIGIN which causes the browser (at least IE8) to refuse to render the content in a frame. First, is this something they can control or is it something SharePoint just does by default? If I ask them to turn this off, could they even do it? Second, can I do something to tell the browser to ignore this http header and just render the frame?

    Read the article

  • WCF, Timer Jobs, Web Service which is better ???

    - by kannan.ambadi
    I am working with a web application based on the ASP.NET 3.5 and WSS 3.0 platform. Recently I've been given a task as follows: import bank statements using FTX (a desktop application) and parse those statements into the database every 24 hours. That is, I need to download the bank statement with the help of a desktop application (which I can call by batch file), then go through each statement (a text file) and load that data into our database for future reference.

    As far as I know, .NET provides the following options to implement such functionality:

      - SharePoint Timer Jobs
      - Web Services
      - WCF
      - Windows Services

    I would like to go for SharePoint Timer Jobs, but there are some plans to move the whole application to the ASP.NET platform. I am interested in WCF since I haven't much experience with WCF applications, but I'm not in a position to take the final decision :)

    Which is the most suitable way for this kind of task? Please suggest.
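
    Since this is a fixed daily schedule with nothing calling in, a plain Windows Service (or a scheduled task running a console app) is arguably the lightest of the options listed; WCF and web services mostly pay off when an external caller needs an endpoint. A rough sketch, with the batch file path and the parsing step left as placeholders:

        using System;
        using System.Diagnostics;
        using System.ServiceProcess;
        using System.Timers;

        // Hypothetical sketch: run the import once every 24 hours from a Windows Service.
        public class StatementImportService : ServiceBase
        {
            private readonly Timer timer = new Timer(TimeSpan.FromHours(24).TotalMilliseconds);

            public static void Main()
            {
                ServiceBase.Run(new StatementImportService());
            }

            protected override void OnStart(string[] args)
            {
                timer.Elapsed += (sender, e) => RunImport();
                timer.Start();
                RunImport(); // also run once at startup
            }

            protected override void OnStop()
            {
                timer.Stop();
            }

            private void RunImport()
            {
                // 1) Call the FTX desktop application via the batch file (path is a placeholder).
                Process process = Process.Start(@"C:\Import\download-statements.bat");
                process.WaitForExit();

                // 2) Parse the downloaded text files and write them to the database.
                //    (Parsing and data access are application-specific and omitted here.)
            }
        }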

    Read the article

  • using Silverlight in SCORM content

    - by Jason
    I'm building an LMS system using SharePoint (WSS 3.0) with the SharePoint Learning Kit (SLK). One of the requirements is to be able to host Silverlight content within the SCORM package. Has anyone done this before? I haven't been able to find much (anything) online that talks about how to do this. Most of the content tools that exist for SCORM are able to handle Flash, but I haven't come across anything that will do Silverlight. If all else fails, I'll try to manually build a SCORM package, but I'd really like to find some examples or how-tos of doing this with Silverlight first. Has anyone done this before?

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I'd built a WSSv3 application which uploads files in small chunks; when every data piece arrives, I temporarily keep it in a SQL 2005 image data type field, for performance reasons**.

    The problem comes when the upload ends: I need to move the data from my SQL Server to the SharePoint Document Library through the WSSv3 object model. Right now, I can think of two approaches:

        SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and

        SPFile file = folder.Files.Add("filename", new byte[] { });

        using (Stream stream = file.OpenBinaryStream())
        {
            // ... init vars and stuff ...
            while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
            {
                stream.Write(buffer, 0, (int)bytes); // Timeout issues
            }
            file.SaveBinary(stream);
        }

    Are there any other ways to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice a performance degradation as the file grows (100 MB and beyond).

    Read the article

  • Building webparts with Visual Studio 2010 Express

    - by MalphasWats
    Hi, I'm trying to get started with building my own webparts, planning to follow this MSDN article. I've downloaded Visual C# 2010 Express - I'm not quite at the point where I feel comfortable dropping 1000 big ones yet - and I installed Visual Web Developer 2010 Express via the WPInstaller.

    Following through the tutorial, aside from the fact that I don't get the option to create a "Web Control Library" (a gap I filled with this article), I can't seem to find the sn.exe tool (or the "Visual Studio 2005 Command Prompt"!).

    I know it's not quite a direct programming related question, but I can't even get the thing going yet! Any help is appreciated. Thanks

    EDIT: I think I may be jumping the gun quite considerably. I wrote a simple hello world example and tried to build it, but it doesn't have any references to the Microsoft.SharePoint packages and they don't appear in my lists. Am I understanding some more research I've done (namely this) correctly, in that I have to actually have a full installation of SharePoint on the machine I'm developing on?

    Read the article

  • Which is the better approach in custom pages ?

    - by Mina Samy
    Hi all, I want to create a custom new item page for SharePoint, but there are two approaches that I can use, and I want to hear about your experience to determine which is better.

    The first is to create a page in a library, then create a C# library project to handle the events of the controls on the page.

    The second is to define a feature on the content type of my list and specify the new item form to be my custom form, then create a website containing the custom form and put this site in the layouts folder.

    For me the first approach is fine, but the problem is that a user may still access the default SharePoint new item form, which I don't want to happen. On the other hand, I don't like the idea of placing the form in a library on the site. So which is better in your opinion?

    Thanks

    Read the article

  • Is it possible to connect slots of a model object to the GUI in QT4 -Designer?

    - by Robert0
    So I am trying to build a Model-View window using Qt Designer and C++. For that reason I created a QObject-derived class as my model. It provides slots and signals to access it, like setFileName(QString) or fileNameChanged(QString).

    I got a little into using signal drag and drop in Qt Designer and found it quite nice, VA-Smalltalk-like. After a while I was wondering if I could also connect my model to this. So is it possible to somehow introduce my model object into the Window/GUI and let Qt Designer connect signals and slots from the model object to the GUI?

    In essence, write for me:

        connect(model, SIGNAL(fileNameChanged(QString)), ui->labelFn, SLOT(setText(QString)));
        connect(ui->textEdit2, SIGNAL(textChanged(QString)), model, SLOT(setFileName(QString)));

    Thanks for explaining

    Read the article
