Search Results

Search found 34232 results on 1370 pages for 'sharepoint list'.


  • How to troubleshoot the "The server tag is not well formed." error in SharePoint?

    - by David Lay
    I'm trying to edit a legacy WSS 3.0 SharePoint site. While messing around with a 700+ line ASPX page I got a "The server tag is not well formed." error from SharePoint, and the ?content=1 trick does not work. Does anyone have a tip on how to find the line that's causing the problem? I'm expecting something like the ASP.NET YSOD; at least that's useful. If it's worth something, I have access to the actual server.
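
    A workaround that usually surfaces the offending line (a sketch, assuming you can edit the web application's web.config on that server): loosen the settings that make WSS swallow the full ASP.NET error page, then request the page again to get a YSOD with a stack trace. Revert the changes when you are done.

        <!-- In the web application's web.config; keep the other attributes
             these elements already carry. -->
        <SharePoint>
          <SafeMode CallStack="true" AllowPageLevelTrace="true">
            <!-- leave the existing PageParserPaths content in place -->
          </SafeMode>
        </SharePoint>
        <system.web>
          <customErrors mode="Off" />
          <compilation batch="false" debug="true" />
        </system.web>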

    Read the article

  • How to get "Friends who like/play games" list (similar to Farmville 2)

    - by FBAppDev
    I'm working on a Facebook app, and I'm trying to get a friends list similar to Farmville 2's: they have a "friends who like games" list. My first thought is that I can get a list of all friends and then, for each friend, see if they like any pages with type == "GAMES/TOYS". But ideally I would like to get the list in one query (not by making one Graph API or FQL request per friend). Is this possible, and if so, how?

    Read the article

  • Using List<T> when dealing with pointers in C#

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item? Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I will rely on the fact that I can process faster than the average image rate, and will eventually catch up. So, what I want to do is put my images into a List<Image> when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My issue is that I am worried that since I am adding the image "Image1" to the list, then filling "Image1" with a new image (during the next acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is actually just a pointer). So, my code looks a little like this:

        while (!exitcondition)
        {
            if (ImageAvailable())
            {
                Image1 = AcquireImage();
                ImgList.Add(Image1);
            }
            if (ImgList.Count > 0)
            {
                ProcessEngine.NewImage(ImgList[0]);
                ImgList.RemoveAt(0);
            }
        }

    Given the above, how can I ensure that:
    - I don't replace all items in the list every time Image1 is modified;
    - I don't need to pre-declare a number of images in order to do this kind of processing;
    - I don't create a memory-devouring monster.
    Any advice is greatly appreciated.
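
    A minimal sketch of the usual fix (ImageAvailable, AcquireImage and ProcessImage stand in for the poster's own members): make sure each list entry refers to its own Image object, cloning the acquisition buffer if the driver reuses it, and dequeue from the front for processing.

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        class ImagePump
        {
            // Each enqueued entry is an independent Image instance, so later
            // acquisitions cannot overwrite what is already queued.
            private readonly Queue<Image> pending = new Queue<Image>();

            // Stand-ins for the poster's acquisition/processing members.
            private bool ImageAvailable() { return false; }
            private Image AcquireImage() { return new Bitmap(1, 1); }
            private void ProcessImage(Image img) { }

            public void Run(Func<bool> exitCondition)
            {
                while (!exitCondition())
                {
                    if (ImageAvailable())
                    {
                        // If AcquireImage() reuses one buffer, snapshot it; if it
                        // allocates a fresh Image per call, the Clone() is
                        // unnecessary and the queue alone suffices.
                        pending.Enqueue((Image)AcquireImage().Clone());
                    }
                    if (pending.Count > 0)
                    {
                        Image next = pending.Dequeue();
                        ProcessImage(next);
                        next.Dispose(); // release the snapshot once processed
                    }
                }
            }
        }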

    Read the article

  • Install Domain Controller – Part 1 of building my own development SharePoint 2010 farm

    - by ybbest
    As memory has become really cheap, a couple of days ago I upgraded my laptop to 12 GB. Together with my old desktop, I have now decided to build my own SharePoint farm at home, and to document the steps to build a simple SharePoint farm. I will use Windows Server 2008 R2 and VMware. In this first part of the series, I will create my domain controller. Here are the steps to install it (an unattended variant is sketched after the list):
    1. Open the command line by going to Run, typing CMD, and then typing dcpromo at the prompt. The AD installation wizard will appear; click Next.
    2. Click Next on the following screen.
    3. Select "Create a new domain in a new forest" and click Next.
    4. Type a domain name (e.g. ybbest.com) and click Next.
    5. In my case, I select the Windows Server 2008 R2 forest functional level and click Next.
    6. Leave the defaults and click Next. (If you have not set a static IP address, you need to do so now.)
    7. You might get a scary-looking prompt; just ignore the message and click Yes.
    8. Leave the default settings and click Next.
    9. Type a password to use when you need to restore your domain (the Directory Services Restore Mode password).
    10. Click Next and restart your computer; this completes the installation of your domain controller.
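
    For repeat rebuilds of the lab, the same promotion can be scripted; a hedged sketch of an unattended run (the domain name is this post's example, the remaining keys are standard DCInstall options for Server 2008 R2 - verify them against your environment before use):

        rem Run: dcpromo /unattend:newforest.txt
        rem --- newforest.txt ---
        [DCInstall]
        ReplicaOrNewDomain=Domain
        NewDomain=Forest
        NewDomainDNSName=ybbest.com
        DomainNetbiosName=YBBEST
        ForestLevel=4
        DomainLevel=4
        InstallDNS=Yes
        SafeModeAdminPassword=<your DSRM password>
        RebootOnCompletion=Yes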

    Read the article

  • How to reference jQuery in SharePoint 2010

    - by ybbest
    In normal ASP.NET development, in order to add jQuery to your solution you add the following script reference to your master page:

        <script language="javascript" type="text/javascript" src="Scripts/jquery-1.4.1.min.js"></script>

    There are not many differences in SharePoint 2010; in fact you have quite a few ways to achieve this. The first thing you need to do is deploy jQuery using the SharePoint module template in Visual Studio. Then you can choose one of the following ways of referencing jQuery (a sketch of option 4 follows below):
    1. Using a delegate control
    2. In the master page
    3. Ad hoc (e.g. in a site page or web part)
    4. Using a custom action (can be used in a sandboxed solution; you can find an example here.)
    References:
    jQuery
    How to bootstrap jQuery on every SharePoint page, even in the Sandbox
    Referencing JavaScript Files with SharePoint 2010 Custom Actions using ScriptSrc
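
    A hedged sketch of option 4, the ScriptLink custom action (the ScriptSrc path assumes your module deployed jQuery to the Style Library; the ~SiteCollection token trick is the one described in the last reference above, so double-check it against that post):

        <?xml version="1.0" encoding="utf-8"?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <!-- Emits a script reference on every page of the site collection. -->
          <CustomAction
              Id="jQueryScriptLink"
              Location="ScriptLink"
              ScriptSrc="~SiteCollection/Style Library/Scripts/jquery-1.4.1.min.js"
              Sequence="100" />
        </Elements>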

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

    One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by adding the web parts to the site pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss, and provide a solution for, is a package that has:
    1. A master page
    2. A page layout
    3. Page web parts
    4. Site pages
    Almost all of it is done in code, without the developers having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that we skip this in development; in fact, we build these pages incrementally, testing our web parts and so on as we go. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and voilà, their site appears! I had a project that had me build 8 pages like this as part of the solution. In this blog post, I am taking a master page solution that I have called DJGreenMaster. It is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development; it is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. It is a complete solution for adding a master page to a site collection, and it contains:
    1. A master page module, which contains the master page and page layout
    2. The footer module, to add the footer web part
    3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
    4. Three features and two feature event receivers:
       a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
       b. DJGreenMaster, used to add the master page and page layout; in an event receiver it changes the master page to DJGreenMaster, creates the footer list and checks the files in
       c. DJGreenMasterWebParts, which adds the footer web part to the site collection
    I won't go over the code for this, as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion: the first two requirements are completed, and I now need to add the page web parts and build the pages in code. For the page web parts, I will use one downloaded from CodePlex, which does not use a SharePoint custom list for simplicity (a Weather web part), and another downloaded from MSDN, a SharePoint custom calendar web part, to which I had to add some functionality (using jQuery) to make the events color-coded beyond the built-in 10 overlays.
    With the added projects in the solution, and the Weather web part and the jQuery site calendar deployed, we get to the final item: creating the publishing pages. We need to add a feature to the DJGreenMaster project, which I will name DJSitePages, along with an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. Add a reference to Microsoft.SharePoint.Publishing.dll, found in the ISAPI folder of the 14 hive. First we add some static methods that we will call from our event receiver:

        private static void checkOut(string pagename, PublishingPage p)
        {
            if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
            {
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
                {
                    p.CheckOut();
                }
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
                {
                    p.CheckIn("initial");
                    p.CheckOut();
                }
            }
        }

        private static void checkin(PublishingPage p, PublishingWeb pw)
        {
            SPFile publishFile = p.ListItem.File;
            if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
            {
                publishFile.CheckIn("CheckedIn");
                publishFile.Publish("published");
            }
            // In case of content approval, approve the file (publishing site)
            if (pw.PagesList.EnableModeration)
            {
                publishFile.Approve("Initial");
            }
            publishFile.Update();
        }

    CheckIn and CheckOut are required when dealing with pages in a publishing site. Now let's look at the FeatureActivated event receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            object oParent = properties.Feature.Parent;
            if (properties.Feature.Parent is SPWeb)
            {
                currentWeb = (SPWeb)oParent;
                currentSite = currentWeb.Site;
            }
            else
            {
                currentSite = (SPSite)oParent;
                currentWeb = currentSite.RootWeb;
            }
            // create the publishing pages
            CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
            //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
        }

    Basically we are calling the CreatePublishingPage method with these parameters: the current web, the name of the page, the page layout, and the title of the page.
    Let's look at the CreatePublishingPage method:

        private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
        {
            PublishingSite pubSiteCollection = new PublishingSite(site.Site);
            PublishingWeb pubSite = null;
            if (pubSiteCollection != null)
            {
                // Assign an object to the pubSite variable
                if (PublishingWeb.IsPublishingWeb(site))
                {
                    pubSite = PublishingWeb.GetPublishingWeb(site);
                }
            }
            // Search for the page layout for creating the new page
            PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
            // Check whether the page layout could be found in the collection;
            // if not (== null), return, because the page has to be based on
            // an existing page layout
            if (currentPageLayout == null)
            {
                return;
            }
            PublishingPageCollection pages = pubSite.GetPublishingPages();
            foreach (PublishingPage p in pages)
            {
                // The page already exists
                if (p.Name == pageName) return;
            }
            PublishingPage newPage = pages.Add(pageName, currentPageLayout);
            newPage.Description = pageName.Replace(".aspx", "");
            // Here you can set some properties like:
            newPage.IncludeInCurrentNavigation = true;
            newPage.IncludeInGlobalNavigation = true;
            newPage.Title = title;
            // build the page
            switch (pageName)
            {
                case "Home.aspx": // must match the page name passed in above
                    checkOut("Home.aspx", newPage);
                    BuildHomePage(site, newPage);
                    break;
                default:
                    break;
            }
            // newPage.Update();
            // Now we can check the newly created page in to the Pages library
            checkin(newPage, pubSite);
        }

    The narrative of what is going on here:
    1. We find out whether we are dealing with a publishing web.
    2. We get the page layout.
    3. We create the page in the Pages list.
    4. Based on the page name, we build that page. (Here is where we can add the methods to build multiple pages.)
    In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts we need to add references to the two web part projects in the solution:

        using WeatherWebPart.WeatherWebPart;
        using CSSharePointCustomCalendar.CustomCalendarWebPart;

    We can then reference them in the BuildHomePage method.
    Let's look at BuildHomePage:

        private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
        {
            // Get the web part manager for the page (for other pages, repeat
            // this pattern, changing to the web parts for that page)
            SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
                web.Url + "/Pages/Home.aspx",
                System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

            WeatherWebPart.WeatherWebPart.WeatherWebPart wwp =
                new WeatherWebPart.WeatherWebPart.WeatherWebPart()
                { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };
            //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
            //wwpDic.Add("AreaCode", "2504627");
            //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

            // Add the web part to a page layout web part zone
            mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

            CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp =
                new CustomCalendarWebPart()
                { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };
            mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

            pubPage.Update();
        }

    Here is what we are doing:
    1. We get a reference to the SharePoint limited web part manager for Home.aspx.
    2. We instantiate a new Weather web part and use the manager to add it to the page, in a web part zone identified by ID; hence the need for a page layout whose zone IDs the developer knows.
    3. We instantiate the calendar web part and use the manager to add it to the page.
    4. We call the publishing page's Update method.
    5. Lastly, the CreatePublishingPage method checks in the page just created.
    I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the green master was already built. So what am I saying? Build your web parts, master pages, etc., and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution: Code

    Read the article

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be a speaker at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija, it was clear to me that this was an event I should come back to. Why? Because the technical level of the sessions was very well judged, and the sessions I attended were actually pretty hardcore. Now, two years later, I will be back, but this time as a speaker. Here are my three almost-finished sessions for Sinergija12.

    ASP.NET MVC 4 Overview
    This session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all the important new features: agent-based output, new application templates, Web API and single-page applications. This session is for everybody who plans to move to ASP.NET MVC 4 or who plans to start building modern web sites.

    Building SharePoint Online applications using Napa Office 365
    The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows, through practical examples, how to build online applications for SharePoint 2013.

    Cloud-enabling ASP.NET MVC applications
    The cloud era is here, and over the next years more and more web applications will be hosted in cloud environments; some of our current web applications will also be moved there. This session shows the audience how to change the architecture of an ASP.NET web application so it runs on shared hosting and Windows Azure with the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.

    All developers who are coming to Sinergija12 are welcome at my sessions. See you there! :)

    Read the article

  • permission denied: /etc/apt/sources.list

    - by Eli
    I'm trying to install the Java JRE. I usually do it like this:

        sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list
        sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 5CB26B26
        sudo apt-get update
        sudo apt-get install update-sun-jre
        exit

    but when I do

        sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list

    I see

        permission denied: /etc/apt/sources.list

    When I do ls -l /etc/apt/sources.list I see

        -rw-r--r-- 1 root root 3360 Aug 26 01:45 /etc/apt/sources.list

    When I do

        sudo mv /etc/apt/sources.list /etc/apt/sources.list.old
        sudo cat /etc/apt/sources.list.old | sudo tee /etc/apt/sources.list

    I see

        #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/
        #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/
        #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://lb.archive.ubuntu.com/ubuntu/ precise main restricted
        deb-src http://lb.archive.ubuntu.com/ubuntu/ precise main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted
        deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://lb.archive.ubuntu.com/ubuntu/ precise universe
        deb-src http://lb.archive.ubuntu.com/ubuntu/ precise universe
        deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe
        deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://lb.archive.ubuntu.com/ubuntu/ precise multiverse
        deb-src http://lb.archive.ubuntu.com/ubuntu/ precise multiverse
        deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse
        deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
        deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
        deb http://security.ubuntu.com/ubuntu precise-security universe
        deb-src http://security.ubuntu.com/ubuntu precise-security universe
        deb http://security.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://security.ubuntu.com/ubuntu precise-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        # deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu precise partner
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu precise main
        deb-src http://extras.ubuntu.com/ubuntu precise main

    and the issue is not solved; I still see that permission error. I'm on a 64-bit laptop.
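
    The cause, for what it's worth: the >> append is performed by your own (unprivileged) shell before sudo ever runs, so only the echo is privileged. Two standard workarounds (the repository line is the one from the question):

        # let a privileged process do the writing:
        echo 'deb http://www.duinsoft.nl/pkg debs all' | sudo tee -a /etc/apt/sources.list

        # or run the whole redirection inside a root shell:
        sudo sh -c "echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list"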

    Read the article

  • How do I allow an email discussion list in MOSS to collect messages from any email sender

    - by glenatron
    I have a SharePoint discussion list that is tied to an Exchange list, with the idea that it will be able to archive discussions on that list and make them generally accessible, searchable and so on. The problem is that although I have checked the "Accept e-mail messages from any sender" option on the discussion board, it still appears to see only emails from members of the domain; nothing sent to the list from outside gets picked up by the SharePoint site. Any suggestions as to what else I have to do?

    Read the article

  • Are Python list comprehensions always a good programming practice?

    - by dln385
    To make the question clear, I'll use a specific example. I have a list of college courses, and each course has a few fields (all of which are strings). The user gives me a string of search terms, and I return a list of courses that match all of the search terms. This can be done in a single list comprehension or a few nested for loops. Here's the implementation. First, the Course class:

        class Course:
            def __init__(self, date, title, instructor, ID, description, instructorDescription, *args):
                self.date = date
                self.title = title
                self.instructor = instructor
                self.ID = ID
                self.description = description
                self.instructorDescription = instructorDescription
                self.misc = args

    Every field is a string, except misc, which is a list of strings. Here's the search as a single list comprehension. courses is the list of courses, and query is the string of search terms, for example "history project".

        def searchCourses(courses, query):
            terms = query.lower().strip().split()
            return tuple(course for course in courses if all(
                term in course.date.lower()
                or term in course.title.lower()
                or term in course.instructor.lower()
                or term in course.ID.lower()
                or term in course.description.lower()
                or term in course.instructorDescription.lower()
                or any(term in item.lower() for item in course.misc)
                for term in terms))

    You'll notice that a complex list comprehension is difficult to read. I implemented the same logic as nested for loops, and created this alternative:

        def searchCourses2(courses, query):
            terms = query.lower().strip().split()
            results = []
            for course in courses:
                for term in terms:
                    if (term in course.date.lower()
                            or term in course.title.lower()
                            or term in course.instructor.lower()
                            or term in course.ID.lower()
                            or term in course.description.lower()
                            or term in course.instructorDescription.lower()):
                        break
                    for item in course.misc:
                        if term in item.lower():
                            break
                    else:
                        continue
                    break
                else:
                    continue
                results.append(course)
            return tuple(results)

    That logic can be hard to follow too. I have verified that both methods return the correct results. Both methods are nearly equivalent in speed, except in some cases: I ran some tests with timeit and found that the former is three times faster when the user searches for multiple uncommon terms, while the latter is three times faster when the user searches for multiple common terms. Still, this is not a big enough difference to make me worry. So my question is this: which is better? Are list comprehensions always the way to go, or should complicated statements be handled with nested for loops? Or is there a better solution altogether?
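
    One "better solution altogether" worth sketching (same behavior as searchCourses, written against the Course class above): factor the per-term test into a helper, so both the comprehension and any loop version stay readable.

        def matches(course, term):
            """True if term occurs in any searchable field of course."""
            fields = (course.date, course.title, course.instructor,
                      course.ID, course.description, course.instructorDescription)
            return (any(term in f.lower() for f in fields)
                    or any(term in item.lower() for item in course.misc))

        def searchCourses3(courses, query):
            terms = query.lower().strip().split()
            return tuple(c for c in courses if all(matches(c, t) for t in terms))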

    Read the article

  • Which parts of SharePoint do I need to understand to build a publicly facing website?

    - by Petras
    I am building a publicly facing website that does the following: users log in and then view a list of their customers; they click on a customer to view their past purchases, order them, change them, etc. This is not a shopping site, by the way; it is a simple look-up tool. Note that none of the data accessed by the website is in anything other than a SQL database (no Office documents). Also, the login does not use the users' Windows credentials on a VPN or anything like that. Typically I would build this as a standard ASP.NET MVC website, but the client says they want to use SharePoint. As I understand it, SharePoint is used for workflow and for websites that are collaboration tools, such as the components you can see here: http://www.sharepointhosting.com/sharepoint-features.html

    Here are my questions:
    1. Would I be right in saying that WSS is completely inappropriate for this task, as it comes with an overhead that provides no benefits?
    2. If I had to use it, would I need WSS or MOSS?
    3. If I had to use it, would I be right in saying the site would consist of a) web parts and b) a custom site layout? How do I create one of these?

    Read the article

  • Is using SharePoint as an intranet/extranet portal a good idea?

    - by Rob
    I work for a Fortune 500 company in IT, and we have developed many systems/applications that do a variety of things. We need some commonality across these applications, and a better portal/dashboard/landing page for them. Our customers and employees would log into this portal and see all the "things" they can do, each linking to its own application. The portal could maybe just iframe each application to keep branding and navigation consistent. We are trying to decide whether to use SharePoint 2007 or 2010 for this, or to develop a portal/dashboard of sorts in house. We would like this portal to look and feel fully branded to our needs, and not even feel like it's using SharePoint (if needed). An example is providing our own menu control to drive the navigation if needed. Does anyone have any pros/cons for using SharePoint in such a way? Any advice on implementation (e.g. use 2010, since it's much easier to customize the design than 2007)?

    Read the article

  • Cisco ASA 5505 allowing inbound ICMPv6

    - by Astron
    I am trying to allow inbound unsolicited ICMPv6 requests from an external link-local address to my outside (external) interface's link-local address. I can ping (echo-request) the external address and receive a pong (echo-reply), but ICMPv6 messages initiated on the far side are dropped. I am running 9.0(1) in order to use some of the newer features. Does the Cisco ASA not allow unsolicited inbound requests from a link-local address? Should it matter if all ICMPv6 is allowed? Statements being denied:

        %ASA-3-313008: Denied IPv6-ICMP type=129, code=0 from fe80::XXXX:XXXX:XXXX:XXXX on interface outside
        %ASA-3-313008: Denied IPv6-ICMP type=131, code=0 from fe80::XXXX:XXXX:XXXX:XXXX on interface outside
        %ASA-3-313008: Denied IPv6-ICMP type=131, code=0 from fe80::XXXX:XXXX:XXXX:XXXX on interface outside
        %ASA-3-313008: Denied IPv6-ICMP type=136, code=0 from fe80::XXXX:XXXX:XXXX:XXXX on interface outside
        %ASA-3-313008: Denied IPv6-ICMP type=136, code=0 from fe80::XXXX:XXXX:XXXX:XXXX on interface outside
        %ASA-3-313008: Denied IPv6-ICMP type=136, code=0 from fe80::XXXX:XXXX:XXXX:XXXX on interface outside

    I created both an inbound ACL and ICMP permit statements:

        access-list OUTSIDE-IN extended permit icmp6 any any
        access-list OUTSIDE-IN extended permit icmp6 any any membership-report
        access-list OUTSIDE-IN extended permit icmp6 any any membership-report 0
        access-list OUTSIDE-IN extended permit icmp6 any any echo-reply 0
        access-list OUTSIDE-IN extended permit icmp6 any any echo-reply
        access-list OUTSIDE-IN extended permit icmp6 any interface outside membership-report
        access-list OUTSIDE-IN extended permit icmp6 any interface outside membership-report 0
        access-list OUTSIDE-IN extended permit icmp6 any6 any6 echo-reply
        access-list OUTSIDE-IN extended permit icmp6 any6 any6 membership-report
        access-list OUTSIDE-IN extended permit icmp6 any6 any6 echo-reply 0
        access-list OUTSIDE-IN extended permit icmp6 any6 any6 membership-report 0
        snip
        access-group OUTSIDE-IN in interface outside
        ipv6 icmp permit any inside
        ipv6 icmp permit any membership-report outside
        ipv6 icmp permit any echo-reply outside
        ipv6 icmp permit any router-advertisement outside
        ipv6 icmp permit any neighbor-solicitation outside
        ipv6 icmp permit any neighbor-advertisement outside
        ipv6 icmp permit any outside

    Read the article

  • Can't move or access WSS Central Administration site

    - by Jim
    We have several WSS servers:
    WSS1
    WSS2
    WSS3
    WSS4

    SharePoint thinks that Central Administration is on WSS3 and that it can be accessed via SSL on port 22641. The problem is that Central Administration is not there; it was removed using the config wizard. We removed Central Admin from all servers to clean everything out, and we tried installing it on WSS1, but the alternate access mappings still point to Central Admin on WSS3. We tried deleting the alternate access mappings, but SharePoint won't let you delete Central Admin's mapping. Later, we removed Central Admin from all of our servers again and tried creating the Central Admin website on WSS3, where SharePoint already thinks it is. But for some reason SharePoint creates the alternate access mappings using SSL, and we don't have a certificate for the server. Why is SharePoint creating alternate access mappings routing to an https internal URL by default? How can we move Central Administration to a new server? We are using WSS 3.0.
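
    One avenue worth trying (a hedged sketch; verify the switches on your build first): psconfig can unprovision and reprovision the Central Administration web application on a chosen box, and stsadm can reset the admin port without SSL, which should also rewrite the stuck mapping.

        rem on WSS3 (or wherever Central Admin should stop living):
        psconfig -cmd adminvs -unprovision

        rem on WSS1, provisioning Central Admin there:
        psconfig -cmd adminvs -provision -port 22641 -windowsauthprovider onlyusentlm

        rem reset the admin port over plain http (no -ssl flag):
        stsadm -o setadminport -port 22641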

    Read the article

  • How to use Nintex Reusable Workflow Template

    - by ybbest
    If you would like to reuse your workflow logic across more than one list or library, you can create a reusable workflow template. Here are the steps:
    1. Go to site settings and create a reusable workflow template.
    2. Select the content type you would like the template to be bound to, and give the workflow a title.
    3. Create your workflow the same way as you would for a list workflow, and publish your workflow.
    4. Finally, you need to add your workflow to the list on which you would like it to run.
    5. Go to workflow settings and add a workflow.
    6. Select the content type and configure the workflow.
    After you have done this, your workflow will run as usual.
    Note:
    1. You cannot conditionally start your workflow.
    2. Your workflow is not automatically bound to the list when you add the content type to the list; you need to configure it manually, as shown in steps 4-6.

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx
    http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if it occurs. As an aside, this post only applies if you are missing portions of Usage data: think blind spots on intermittent days, or user activity regularly sparse for the afternoon/evening. If this fits your scenario, read on; but if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem: The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions, and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days]. Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away, because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following in the ULS:

        04/08/2012 09:30:04.78  OWSTIMER.EXE (0x1E24)  0x2C98  SharePoint Foundation  Health  i0m6  High  Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; the error message indicates that the reported usage has exceeded the partition size for the given day.

    What it means: The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise you to periodically check the ULS logs for this message. Down the road, I plan to explore whether [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built custom health rules, I'd be interested to hear about your experiences).
    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention. Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data: the lower the retention, the lower the divisor and thus the bigger the partition. For example, halving the retention from 14 to 7 days would halve the number of partitions but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions, and so on.

    Recreate the LoggingDB with an increased size. The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact, because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()

    Create a new logging database and attach it to the Usage Service Application using the following command:

        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

        SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (nolock)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'

    Read the article

  • How to clear the recent server name list in SQL Server Management Studio

    - by Pavan Kumar Pabothu
    If you have been using SQL Server Management Studio for a while, you will notice that the list of server names in its login dialog keeps growing. As you can imagine, after six months or a year you will see a long list of server names in the login dialog. How do you clear this list? Management Studio doesn't provide a mechanism to clean or clear the list, so you'll have to do a little browsing through your file system.

    For SQL Server 2005 Management Studio, delete the file:
        C:\Documents and Settings\<user>\Application Data\Microsoft\Microsoft SQL Server\90\Tools\Shell\mru.dat

    For SQL Server 2008 Management Studio, delete the file (note the version folder is 100, not 90):
        C:\Documents and Settings\<user>\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin

    After deleting the file, reopen Management Studio and the server list will be empty.

    Read the article

  • List view pages in the Google index

    - by plantify
    We have a large database of items (plants) which are viewable as individual items (example URL: http://www.plantify.co.uk/Abelia-chinensis/plant-5087) or in a list view (example URL: http://www.plantify.co.uk/page-1/plant). There is a link on the individual page to the list view. We want to rank in Google for the term Abelia chinensis. My question revolves around the list view and its impact on SEO. Should we prevent Google from indexing the list view? Should we put a nofollow on the link to the list view, to prevent us from losing link 'juice' to a page that is really only for navigation?
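
    For reference, the usual pattern for navigation-only list views is noindex,follow rather than nofollow on the inbound link: crawlers keep flowing link equity through the list pages to the individual plant pages, but the list pages stay out of the index. A sketch for the head of the /page-N/plant templates:

        <!-- on list-view pages only, not on the individual plant pages -->
        <meta name="robots" content="noindex, follow">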

    Read the article

  • How to fix "[Errno 13] Permission denied" in Mailman mailing lists

    - by Michael
    After migrating domains from one Plesk server onto another, I get several of these mails every day (the target mailbox does not exist, so I receive them as undeliverable-mail bounces):

        Return-Path: <[email protected]>
        Received: (qmail 26460 invoked by uid 38); 26 May 2012 12:00:02 +0200
        Date: 26 May 2012 12:00:02 +0200
        Message-ID: <20120526100002.xyzxx.qmail@lvpsxxx-xx-xx-xx.dedicated.hosteurope.de>
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <list@lvpsxxx-xx-xx-xx> [ -x /usr/lib/mailman/cron/senddigests ] && /usr/lib/mailman/cron/senddigests
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/var/list>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=list>

        List: xyzxyz: problem processing /var/lib/mailman/lists/xyzxyz/digest.mbox: [Errno 13] Permission denied: '/var/lib/mailman/archives/private/xyzxyz'

    I tried to fix the permissions myself, but the problem still exists.
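
    For what it's worth, Mailman ships a permission checker that targets exactly this class of post-migration breakage; a sketch assuming the Debian/Ubuntu layout visible in the bounce (the group name 'list' is an assumption and varies by distro; some use 'mailman'):

        # report problems first, then run again with -f to fix them:
        /usr/lib/mailman/bin/check_perms
        /usr/lib/mailman/bin/check_perms -f

        # if the private archives are still owned by the wrong account:
        chown -R list:list /var/lib/mailman/archives/private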

    Read the article

  • How-to populate different select list content per table row

    - by frank.nimphius
    A frequent requirement posted on the OTN forum is to render the cells of a table column using instances of af:selectOneChoice, with each af:selectOneChoice instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean using the #{row} expression, where "row" is the value added to the table's var property.

        <af:table var="row">
          ...
          <af:column ...>
            <af:selectOneChoice ...>
              <f:selectItems value="#{browseBean.items}"/>
            </af:selectOneChoice>
          </af:column>
        </af:table>

    The browseBean managed bean referenced in the code snippet above has setItems and getItems methods defined that are accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable (the #{row} reference) is filled with the data object displayed in the currently rendered table row. The managed bean's getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list.

        public void setItems(ArrayList<SelectItem> items) {}

        // this method is executed for each table row
        public ArrayList<SelectItem> getItems() {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elctx = fctx.getELContext();
            ExpressionFactory efactory = fctx.getApplication().getExpressionFactory();
            ValueExpression ve = efactory.createValueExpression(elctx, "#{row}", Object.class);
            Row rw = (Row) ve.getValue(elctx);

            // use one of the row attributes to determine which list to query and
            // show in the current af:selectOneChoice list
            // ...
            ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();
            for ( ... ) {
                SelectItem item = new SelectItem();
                item.setLabel(...);
                item.setValue(...);
                alsi.add(item);
            }
            return alsi;
        }

    For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component (af:selectOneChoice in this example) is instantiated once for the column and then repeatedly used to display the cell data for the individual table rows. This, however, means that you cannot refresh a single select-one-choice component in a table to change its list values; instead, the whole table needs to be refreshed, re-running the managed bean list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, with business services offering low-latency data fetching (e.g. ADF Business Components and EJB) and with server-side caching strategies for the queried data (e.g. storing queried list data in a managed bean in session scope).

    Read the article

  • Calling functions from different classes

    - by A Ron Hubbard Clevenger
    I'm writing a program and I'm supposed to check whether a certain object is in a list before I use it. I set up a contains() method which is supposed to use the equals() method of the Comparable interface I implemented on my Golfer class, but it doesn't seem to call it (I put print statements in to check). I can't seem to figure out what's wrong with the code; the ArrayUnsortedList class I'm using to go through the list even uses the correct toString() method I defined in my Golfer class, but for some reason it won't use the equals() method I implemented.

        // From "GolfApp.java"
        public class GolfApp {
            ListInterface<Golfer> golfers = new ArraySortedList<Golfer>(20);
            Golfer golfer;

            //..*snip*..
            if (this.golfers.contains(new Golfer(name, score)))
                System.out.println("The list already contains this golfer");
            else {
                this.golfers.add(this.golfer = new Golfer(name, score));
                System.out.println("This golfer is already on the list");
            }

        // From "ArrayUnsortedList.java"
        protected void find(T target) {
            location = 0;
            found = false;
            while (location < numElements) {
                if (list[location].equals(target)) // Where I think the problem is
                {
                    found = true;
                    return;
                } else
                    location++;
            }
        }

        public boolean contains(T element) {
            find(element);
            return found;
        }

        // From "Golfer.java"
        public class Golfer implements Comparable<Golfer> {
            //..irrelevant code snipped..//
            public boolean equals(Golfer golfer) {
                String thisString = score + ":" + name;
                String otherString = golfer.getScore() + ":" + golfer.getName();
                System.out.println("Golfer.equals() has been called");
                return thisString.equalsIgnoreCase(otherString);
            }

            public String toString() {
                return (score + ":" + name);
            }
        }

    My main problem seems to be getting the find() function of the ArrayUnsortedList to call my equals() function, but I'm not exactly sure why; like I said, the list code works perfectly with the toString() method I implemented. I'm almost positive the problem has to do with the find() function in the list not calling my equals() method. I tried using some other functions that relied on find() and got the same results.
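
    The likely culprit, sketched below: list[location] is typed as T, which erases to Object, so list[location].equals(target) dispatches to Object.equals. A method with the signature equals(Golfer) merely overloads it and is never chosen. Overriding equals(Object) instead (a sketch built from the Golfer fields above) makes the find() call reach your code:

        // Overrides Object.equals, so the list's list[location].equals(target)
        // call actually dispatches here.
        @Override
        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (!(obj instanceof Golfer)) return false;
            Golfer other = (Golfer) obj;
            return (score + ":" + name)
                    .equalsIgnoreCase(other.getScore() + ":" + other.getName());
        }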

    Read the article

  • C++ linked list based tree structure. Sanely copy nodes between lists.

    - by krunk
    edit Clarification: The intention is not to remove the node from the original list, but to create an identical node (data- and children-wise) and insert that into the new list. In other words, a "move" does not imply a "remove" from the original. endedit

    The requirements:
    - Each node in the list must contain a reference to its previous sibling
    - Each node in the list must contain a reference to its next sibling
    - Each node may have a list of child nodes
    - Each child node must have a reference to its parent node

    Basically what we have is a tree structure of arbitrary depth and length. Something like:

        -root(NULL)
        --Node1
        ----ChildNode1
        ------ChildOfChild
        --------AnotherChild
        ----ChildNode2
        --Node2
        ----ChildNode1
        ------ChildOfChild
        ----ChildNode2
        ------ChildOfChild
        --Node3
        ----ChildNode1
        ----ChildNode2

    Given any individual node, you need to be able to traverse its siblings, the children, or up the tree to the root node. A Node ends up looking something like this:

        class Node {
            Node* previous;
            Node* next;
            Node* child;
            Node* parent;
        };

    I have a container class that stores these and provides STL iterators. It performs your typical linked-list accessors. So insertAfter looks like:

        void insertAfter(Node* after, Node* newNode) {
            Node* next = after->next;
            after->next = newNode;
            newNode->previous = after;
            next->previous = newNode;
            newNode->next = next;
            newNode->parent = after->parent;
        }

    That's the setup; now for the question. How would one move a node (and its children etc.) to another list without leaving the previous list dangling? For example, if Node* myNode exists in listOne and I want to append it to listTwo. Using pointers, listOne is left with a hole in its list, since the next and previous pointers are changed. One solution is pass-by-value of the appended node, so our insertAfter method would become:

        void insertAfter(Node* after, Node newNode);

    This seems like an awkward syntax. Another option is doing the copying internally, so you'd have:

        void insertAfter(Node* after, const Node* newNode) {
            Node* new_node = new Node(*newNode);
            Node* next = after->next;
            after->next = new_node;
            new_node->previous = after;
            next->previous = new_node;
            new_node->next = next;
            new_node->parent = after->parent;
        }

    Finally, you might create a moveNode method for moving, and prevent raw insertion or appending of a node that has already been assigned siblings and parents:

        // default pointer value is 0 in constructor and an operator bool(..)
        // is defined for the Node
        bool isInList(const Node* node) const {
            return (node->previous || node->next || node->parent);
        }

        // then in insertAfter and friends
        if (isInList(newNode))
            // throw some error and bail

    I thought I'd toss this out there and see what folks came up with.
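
    For the clarified requirement (copy, don't detach), a sketch of a recursive deep copy; it assumes the function is declared as a member of Node (so the private pointers are reachable) and that a plain member-wise copy of Node's data is acceptable:

        // Deep-copies this node and its child subtree, detached from the
        // original's siblings, so the copy can be handed to insertAfter()
        // on another list without touching the source list.
        Node* Node::cloneSubtree(Node* newParent) const {
            Node* copy = new Node(*this);           // member-wise copy of data
            copy->previous = copy->next = nullptr;  // detach from old siblings
            copy->parent = newParent;
            copy->child = nullptr;
            Node* prevChild = nullptr;
            for (const Node* c = child; c != nullptr; c = c->next) {
                Node* childCopy = c->cloneSubtree(copy);  // recurse per child
                if (prevChild) {
                    prevChild->next = childCopy;
                    childCopy->previous = prevChild;
                } else {
                    copy->child = childCopy;
                }
                prevChild = childCopy;
            }
            return copy;
        }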

    Read the article

  • Why is my WCF RIA Services custom object deserializing with an extra list member?

    - by oasasaurus
    I have been developing a Silverlight WCF RIA Services application dealing with mock financial transactions. To send summary data to the client more efficiently, without going overboard with serialized entities, I created a summary class that isn't in my EDM, and figured out how to serialize it and send it over the wire to the SL client using DataContract() and DataMember(). Everything seemed to be working out great, until I tried to bind controls to a list inside my custom object. The list seems to always get deserialized with an extra, almost empty entity in it that I don't know how to get rid of. So, here are some of the pieces. First, the relevant bits from the custom object class:

        <DataContract()> _
        Public Class EconomicsSummary
            Public Sub New()
                RecentTransactions = New List(Of Transaction)
                TotalAccountHistory = New List(Of Transaction)
            End Sub

            Public Sub New(ByVal enUser As EntityUser)
                Me.UserId = enUser.UserId
                Me.UserName = enUser.UserName
                Me.Accounts = enUser.Accounts
                Me.Jobs = enUser.Jobs
                RecentTransactions = New List(Of Transaction)
                TotalAccountHistory = New List(Of Transaction)
            End Sub

            <DataMember()> _
            <Key()> _
            Public Property UserId As System.Guid

            <DataMember()> _
            Public Property NumTransactions As Integer

            <DataMember()> _
            <Include()> _
            <Association("Summary_RecentTransactions", "UserId", "User_UserId")> _
            Public Property RecentTransactions As List(Of Transaction)

            <DataMember()> _
            <Include()> _
            <Association("Summary_TotalAccountHistory", "UserId", "User_UserId")> _
            Public Property TotalAccountHistory As List(Of Transaction)
        End Class

    Next, the relevant parts of the function called to return the object:

        Public Function GetEconomicsSummary(ByVal guidUserId As System.Guid) As EconomicsSummary
            Dim objOutput As New EconomicsSummary(enUser)
            For Each objTransaction As Transaction In (From t As Transaction In Me.ObjectContext.Transactions.Include("Account")
                                                       Where t.Account.aspnet_User_UserId = guidUserId
                                                       Select t Order By t.TransactionDate Descending Take 10)
                objTransaction.User_UserId = objOutput.UserId
                objOutput.RecentTransactions.Add(objTransaction)
            Next
            objOutput.NumTransactions = objOutput.RecentTransactions.Count
            …
            Return objOutput
        End Function

    Notice that I'm collecting the NumTransactions count before serialization. Should be 10, right? It is - BEFORE serialization.
    The DataGrid is bound to the data source as follows:

        <sdk:DataGrid AutoGenerateColumns="False" Height="100"
                      MaxWidth="{Binding ElementName=aciSummary, Path=ActualWidth}"
                      ItemsSource="{Binding Source={StaticResource EconomicsSummaryRecentTransactionsViewSource}, Mode=OneWay}"
                      Name="gridRecentTransactions" RowDetailsVisibilityMode="VisibleWhenSelected" IsReadOnly="True">
            <sdk:DataGrid.Columns>
                <sdk:DataGridTextColumn x:Name="TransactionDateColumn" Binding="{Binding Path=TransactionDate, StringFormat=\{0:d\}}" Header="Date" Width="SizeToHeader" />
                <sdk:DataGridTextColumn x:Name="AccountNameColumn" Binding="{Binding Path=Account.Title}" Header="Account" Width="SizeToCells" />
                <sdk:DataGridTextColumn x:Name="CurrencyAmountColumn" Binding="{Binding Path=CurrencyAmount, StringFormat=\{0:c\}}" Header="Amount" Width="SizeToHeader" />
                <sdk:DataGridTextColumn x:Name="TitleColumn" Binding="{Binding Path=Title}" Header="Description" Width="SizeToCells" />
                <sdk:DataGridTextColumn x:Name="ItemQuantityColumn" Binding="{Binding Path=ItemQuantity}" Header="Qty" Width="SizeToHeader" />
            </sdk:DataGrid.Columns>
        </sdk:DataGrid>

    You might be wondering where the ItemsSource is coming from; it looks like this:

        <CollectionViewSource x:Key="EconomicsSummaryRecentTransactionsViewSource"
                              Source="{Binding Path=DataView.RecentTransactions, ElementName=EconomicsSummaryDomainDataSource}" />

    When I noticed that the DataGrid had the extra row, I tried outputting some data after the data source finishes loading, as follows:

        Private Sub EconomicsSummaryDomainDataSource_LoadedData(ByVal sender As System.Object, ByVal e As System.Windows.Controls.LoadedDataEventArgs) Handles EconomicsSummaryDomainDataSource.LoadedData
            If e.HasError Then
                System.Windows.MessageBox.Show(e.Error.ToString, "Load Error", System.Windows.MessageBoxButton.OK)
                e.MarkErrorAsHandled()
            End If
            Dim objSummary As EconomicsSummary = CType(EconomicsSummaryDomainDataSource.Data(0), EconomicsSummary)
            Dim sb As New StringBuilder("")
            sb.AppendLine(String.Format("Num Transactions: {0} ({1})", objSummary.RecentTransactions.Count.ToString(), objSummary.NumTransactions.ToString()))
            For Each objTransaction As Transaction In objSummary.RecentTransactions
                sb.AppendLine(String.Format("Recent TransactionId {0} dated {1} CurrencyAmount {2} NewBalance {3}", objTransaction.TransactionId.ToString, objTransaction.TransactionDate.ToString("d"), objTransaction.CurrencyAmount.ToString("c"), objTransaction.NewBalance.ToString("c")))
            Next
            txtDebug.Text = sb.ToString()
        End Sub

    Output from that looks like this:

        Num Transactions: 11 (10)
        Recent TransactionId 2283 dated 6/1/2010 CurrencyAmount $31.00 NewBalance $392.00
        Recent TransactionId 2281 dated 5/31/2010 CurrencyAmount $33.00 NewBalance $361.00
        Recent TransactionId 2279 dated 5/28/2010 CurrencyAmount $8.00 NewBalance $328.00
        Recent TransactionId 2277 dated 5/26/2010 CurrencyAmount $22.00 NewBalance $320.00
        Recent TransactionId 2275 dated 5/24/2010 CurrencyAmount $5.00 NewBalance $298.00
        Recent TransactionId 2273 dated 5/21/2010 CurrencyAmount $19.00 NewBalance $293.00
        Recent TransactionId 2271 dated 5/20/2010 CurrencyAmount $20.00 NewBalance $274.00
        Recent TransactionId 2269 dated 5/19/2010 CurrencyAmount $48.00 NewBalance $254.00
        Recent TransactionId 2267 dated 5/18/2010 CurrencyAmount $42.00 NewBalance $206.00
        Recent TransactionId 2265 dated 5/14/2010 CurrencyAmount $5.00 NewBalance $164.00
        Recent TransactionId 0 dated 6/1/2010 CurrencyAmount $0.00 NewBalance $361.00

    So I have a few different questions:
    - First and foremost, where the devil is that extra Transaction entity coming from, and how do I get rid of it? Does it have anything to do with the other list of Transaction entities being serialized as part of the EconomicsSummary class (TotalAccountHistory)? Do I need to decorate the EconomicsSummary class members a little more/differently?
    - Second, where are the peculiar values on that extra entity coming from?

    PRE-POSTING UPDATE 1: I did a little checking; it looks like that last entry is the first one in the TotalAccountHistory list. Do I need to do something with CollectionDataContract()?

    PRE-POSTING UPDATE 2: I fixed one bug in TotalAccountHistory: since the objects weren't coming from the database, their keys weren't unique. So I set the keys on the Transaction entities inside TotalAccountHistory to be unique, and guess what? Now, after deserialization, RecentTransactions contains all its original items plus every item in TotalAccountHistory. I'm pretty sure this has to do with the deserializer getting confused by two collections of the same type, but I don't yet know how to resolve it…

    Read the article
