Search Results

Search found 19350 results on 774 pages for 'mysql views'.

  • Implementation help... Subclass NSManagedObject?

    - by Canada Dev
    I'm working on an app where I download a list of products. The downloaded products are displayed in a table, and each one shows a detail view with more information. These same products can be saved as favorites, and for this I am using Core Data. I'd like to be able to re-use a bunch of views for displaying the products, which means the stored object and the downloaded object would have to be the same kind.

    Now, how would I go about best implementing the objects? Can I make a class such as this:

        FavoriteProduct : NSManagedObject
        // implementation

    and then subclass it:

        Product : FavoriteProduct
        // implementation

    The Core Data class just doesn't give me everything. What would be the best way to merge these two object classes so I have as little work ahead of me in terms of implementing the different views for each object? Basically, I just want to be able to call the same methods, etc. on the Product objects as I would on the FavoriteProduct objects, and re-use views for both kinds. There's only a bit of difference between the two (one is of course stored as a favorite and has some extra values such as notes and tags, while the Product one doesn't).

    Thanks in advance

  • making an SQL date column visible to a Drupal view

    - by Donal
    I'm trying to make a table visible to Views. One of the columns has type date (as opposed to a Unix timestamp). The example I initially tried to copy from is in modules/comment.views.inc in the Views module:

        // timestamp (when comment was posted)
        $data['comments']['timestamp'] = array(
          'title' => t('Post date'),
          'help' => t('Date and time of when the comment was posted.'),
          'field' => array(
            'handler' => 'views_handler_field_date',
            'click sortable' => TRUE,
          ),
          'sort' => array(
            'handler' => 'views_handler_sort_date',
          ),
          'filter' => array(
            'handler' => 'views_handler_filter_date',
          ),
        );

    This makes the dates, which are all in the past year or so, show up as "1 Jan 1970 00:33": evidently a value of '2010-05-12', for example, is being interpreted as 2010 seconds past 1 Jan 1970 00:00. Can anyone point me to a correct way of exporting date columns?

    EDIT: I'm following up on some clues found at http://drupal.org/node/476774 .
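
    One workaround, sketched here under the assumption of a MySQL backend and with placeholder names (mytable and event_date are not from the original post), is to hand Views a value that is already the Unix timestamp that views_handler_field_date expects, then describe the computed column rather than the raw DATE column in hook_views_data():

        -- Hypothetical sketch: wrap the table in a database view that adds a
        -- Unix-timestamp column, and point the Views handlers at event_timestamp.
        CREATE OR REPLACE VIEW mytable_for_views AS
        SELECT id,
               event_date,
               UNIX_TIMESTAMP(event_date) AS event_timestamp
        FROM mytable;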

  • Filtering Questions (posts) by tag_name in TagController#index passing /tags?tag_name=something

    - by bgadoci
    I am trying to get my TagsController#index action to display only the Questions that contain certain tags when passing the tag_name search parameter. I am not getting an error; I just can't get the filter to work correctly, such that upon receiving the /tags?tag_name=something URL only the matching questions are displayed. Here is the setup:

        class Tag < ActiveRecord::Base
          belongs_to :question

          def self.search(str)
            return [] if str.blank?
            cond_text = str.split.map{|w| "tag_name LIKE ? "}.join(" OR ")
            cond_values = str.split.map{|w| "%#{w}%"}
            all(:order => "created_at DESC", :conditions => (str ? [cond_text, *cond_values] : []))
          end
        end

    and

        class Question < ActiveRecord::Base
          has_many :tags, :dependent => :destroy
        end

    The tag link that sends the URL to the TagsController looks like this:

        <%= link_to(tag_name, tags_path(:tag_name => tag_name)) %>

    and outputs: /tags?tag_name=something

    In my /views/tags/index.html.erb view I render the questions through the partial /views/questions/_question.html.erb:

        <%= render :partial => @questions %>

    When I send the URL with the search parameter nothing happens. Here is my TagsController#index action:

        def index
          @tags = Tag.search(params[:search]).paginate :page => params[:page], :per_page => 5
          @tagsearch = Tag.search(params[:search])
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 100)
          @questions = Question.all(:order => 'created_at DESC', :limit => 50)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @tags }
          end
        end

    How can I properly filter the questions displayed in /views/tags/index.html.erb by the tag.tag_name parameter?
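
    Note that the link emits tag_name while the index action reads params[:search], so whichever parameter name ends up being used, the query the view ultimately needs is, in plain SQL terms, something like the following sketch (table and column names inferred from the models above, 'something' is a placeholder):

        -- Questions that have at least one matching tag, newest first.
        SELECT DISTINCT questions.*
        FROM questions
        JOIN tags ON tags.question_id = questions.id
        WHERE tags.tag_name LIKE '%something%'
        ORDER BY questions.created_at DESC;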

  • iphone selecting a row to change view

    - by Rob J
    I have created the view controllers and attached them to their respective views. I can't figure out how to make the following code go to the right views, though. (This is in the root view controller:)

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            RunViewController *runViewController = [[RunViewController alloc] initWithNibName:@"RunView" bundle:[NSBundle mainBundle]];
            CalcViewController *calcViewController = [[CalcViewController alloc] initWithNibName:@"CalcView" bundle:[NSBundle mainBundle]];
            PushViewController *pushViewController = [[PushViewController alloc] initWithNibName:@"PushView" bundle:[NSBundle mainBundle]];
            SitViewController *sitViewController = [[SitViewController alloc] initWithNibName:@"SitView" bundle:[NSBundle mainBundle]];
            TimerViewController *timerViewController = [[TimerViewController alloc] initWithNibName:@"TimerView" bundle:[NSBundle mainBundle]];

            [self.navigationController pushViewController:runViewController animated:YES];
            [runViewController release];
            [self.navigationController pushViewController:pushViewController animated:YES];
            [pushViewController release];
            [self.navigationController pushViewController:sitViewController animated:YES];
            [sitViewController release];
            [self.navigationController pushViewController:timerViewController animated:YES];
            [timerViewController release];
            [self.navigationController pushViewController:calcViewController animated:YES];
            [calcViewController release];

            runViewController = nil;
            pushViewController = nil;
            sitViewController = nil;
            timerViewController = nil;
            calcViewController = nil;
        }

    Each time I select any of the rows in the table, all of the views come up. How do I arrange them so that only the view the selected row is supposed to open actually opens?

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no lines are counted. Let's assume the following schema:

        CREATE TABLE views (
          date_event timestamp with time zone,
          event_id   integer
        );

    and the following content:

        2012-01-01 00:00:05    2
        2012-01-01 01:00:05    5
        2012-01-01 03:00:05    8
        2012-01-01 03:00:15   20

    I want to group by hour and count the number of lines. I wish I could retrieve the following:

        2012-01-01 00:00:00    1
        2012-01-01 01:00:00    1
        2012-01-01 02:00:00    0
        2012-01-01 03:00:00    2
        2012-01-01 04:00:00    0
        2012-01-01 05:00:00    0
        .
        .
        2012-01-07 23:00:00    0

    I mean that for each time-range slot, I count the number of lines in my table whose date corresponds; otherwise, I return a line with a count of zero. The following will definitely not work (it will yield only the slots that actually contain lines):

        SELECT extract(hour from date_event), count(*)
        FROM views
        WHERE date_event > '2012-01-01' AND date_event < '2012-01-07'
        GROUP BY extract(hour from date_event);

    Please note I might also need to group by minute, or by hour, or by day, or by month, or by year (multiple queries are possible of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
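
    A common approach, sketched here against the schema above, is to generate the full list of slots with generate_series() and LEFT JOIN the data onto it, so empty slots survive with a count of zero. To group by minute, day, month or year instead, change both the step interval and the date_trunc() unit:

        SELECT s.slot,
               count(v.event_id) AS views   -- count(column) ignores the NULLs from the outer join
        FROM generate_series('2012-01-01'::timestamptz,
                             '2012-01-07'::timestamptz,
                             '1 hour') AS s(slot)
        LEFT JOIN views v
               ON date_trunc('hour', v.date_event) = s.slot
        GROUP BY s.slot
        ORDER BY s.slot;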

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 d

    - by Kirzilla
    Hello,

    Let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K and all information about videos is stored in MySQL. The number of daily video views is about 1.5 million. As instruments we have a hard disk drive (text files), MySQL, and Redis.

    Views:

      - top viewed
      - top viewed last 24 hours
      - top viewed last 7 days
      - top viewed last 30 days
      - top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate totals (for the last 24 hours) and update statistics in tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days and last 365 days. This method seems very poor to me, because we would have to store information about the last 365 days for each video to make correct calculations.

    Is there any other good method? Perhaps we have to choose other instruments for this? Thank you.
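
    One alternative to replaying 365 days of logs, sketched here with hypothetical table and column names, is to keep a single hourly rollup table in MySQL and let every "top viewed" window become a ranged aggregate over it; rows older than the longest window can be pruned or collapsed into daily rows:

        CREATE TABLE video_views_hourly (
          video_id INT UNSIGNED NOT NULL,
          slot     DATETIME     NOT NULL,  -- hour the views were counted in
          views    INT UNSIGNED NOT NULL,
          PRIMARY KEY (video_id, slot),
          KEY idx_slot (slot)
        );

        -- "Top viewed, last 7 days" is then one query; swap the interval for
        -- the 24-hour, 30-day and 365-day variants.
        SELECT video_id, SUM(views) AS total_views
        FROM video_views_hourly
        WHERE slot >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY total_views DESC
        LIMIT 100;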

  • MVC route doesn't work

    - by bill
    Hi All,

    I have a bunch of Views in a Default folder that represent single "static" pages. Everything works as advertised, except I tried adding a new page yesterday using the exact same routing syntax and cannot for the life of me get it to work. Here's an example of a working route:

        routes.MapRoute(
            "OurProgram",  // Route name
            "Our_Program", // URL with parameters
            new { controller = "Default", action = "OurProgram" }
        );

    The filename is OurProgram, and hitting http://localhost/Our_Program/ opens the correct view, which resides in the Views/Default folder. So I added another view into this folder, Views/Default/BuyNow.aspx, and added the route:

        routes.MapRoute(
            "BuyNow",  // Route name
            "Buy_Now", // URL with parameters
            new { controller = "Default", action = "BuyNow" }
        );

    And this doesn't open. I have tried the route debugger (http://haacked.com/archive/2008/03/13/url-routing-debugger.aspx) and it correctly identifies the route. I am at a loss. I have tried re-creating the view. I am using MVC.Net 2.0 and VS 10. Any help is appreciated!

  • I can not navigate to the next page in windows phone 7

    - by Vijay
    I am trying to navigate from one page to another page depending on the login. If the user is already logged in, the Welcome page should open; otherwise the Login page should open. I am trying it like this, with the Splash page as the startup page.

    This is the splash screen xaml.cs:

        namespace NewExample.Views
        {
            public partial class SplashPage : PhoneApplicationPage
            {
                public SplashPage()
                {
                    InitializeComponent();
                    this.DataContext = new SplashPageViewModel();
                }
            }
        }

    This is the splash screen view model, where I check whether the user is already logged in:

        namespace NewExample.ViewModel
        {
            public class SplashPageViewModel
            {
                public static bool isLogin = false;

                public SplashPageViewModel()
                {
                    var rootFrame = (App.Current as App).RootFrame;
                    if (isLogin)
                        rootFrame.Navigate(new Uri("/Views/WelcomePage.xaml", UriKind.Relative));
                    else
                        rootFrame.Navigate(new Uri("/Views/LoginPage.xaml", UriKind.Relative));
                }
            }
        }

    But it is not working. Only the splash page is showing; it never navigates to another page. Please help me resolve this problem.

  • Additional information in ASP.Net MVC View

    - by Max Malmgren
    I am attempting to implement a custom locale service in an MVC 2 web page. I have an interface IResourceDictionary that provides a couple of methods for accessing resources by culture. This is because I want to avoid the static classes of .Net resources. The problem is accessing the chosen IResourceDictionary from the views.

    I have contemplated using the ViewDataDictionary: creating a base controller, from which all my controllers inherit, that adds my IResourceDictionary to the ViewData before each action executes. Then I could call my resource dictionary this way:

        (ViewData["Resources"] as IResourceDictionary).GetEntry(params);

    Admittedly, this is extremely verbose and ugly, especially in the inline code we are encouraged to use in MVC. Right now I am leaning towards static class access:

        ResourceDictionary.GetEntry(params);

    because it is slightly more elegant. I also thought about adding it to my typed model for each page, which seems more robust than adding it to the ViewData.

    What is the preferred way to access my ResourceDictionary from the views? All my views will be using this dictionary.

  • How can I combine a LINQ query with an IQueryable<Guid>

    - by John
    I have a LINQ query that uses 1 table + a large number of views. I'd like to be able to write something like this:

        IQueryable<Guid> mostViewedWriters;
        switch (datePicker)
        {
            case DatePicker.Last12Hours:
                mostViewedWriters = from x in context.tempMostViewed12Hours select x.GuidId;
                break;
            case DatePicker.Last24Hours:
                mostViewedWriters = from x in context.tempMostViewed12Hours select x.GuidId;
                break;
            case DatePicker.Last36Hours:
                mostViewedWriters = from x in context.tempMostViewed12Hours select x.GuidId;
                break;
        }

        var query = from x1 in context.Articles
                    join x2 in context.Authors on x1.AuthorId == x2.AuthorId
                    join x3 in mostViewedWriters on x2.AuthorId == x3.Id
                    select new { x2.AuthorName, x1.ArticleId, x1.ArticleTitle };

    The above C# is pseudo-code, written to protect the innocent (me). The gist of the question is this: I have a query that is related to the results of a view. That view, however, could be one of many different views, and all the views return the same data type. I thought that I might be able to create an IQueryable that would contain the Ids that I need and use that in the query. Alas, that effort has stalled.

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION

    If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services.

    The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model; in fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc. are code that is deployed to the server.

    The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of them was made against creating my own WCF services to do what I needed.

    The current project I'm working on involved several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I started up Visual Studio and built a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team; they would simply call the methods exposed by it and voila! it would do some task or operation in SharePoint.

    SOLUTION

    My integration layer implemented an interface that defined some of the basic integration tasks I was to put together. My data layer was about the same: it implemented an interface with some of the tasks I was going to develop. This gave me the opportunity to develop different data layers, and ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved quite helpful, because I wrote one data layer completely implementing the SharePoint built-in web services and another implementing my own WCF service.

    I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service call. So, for example, if I used Lists.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work of connecting to the correct URL, performing the basic operation of contacting the service, and so on. If I used Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but now interacting with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, in order to compare the two approaches side by side, we can look at both data layers and what is involved in each.

    Let's take a look at the "Create Project" task or operation. The integration point is described as: "the DLL is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project, and this is the class that contacts it:

        class DocumentRetrieval
        {
            private ListsSoapClient _service;
            private bool _impersonation;

            public DocumentRetrieval(bool impersonation, string endpt)
            {
                _service = new ListsSoapClient();
                this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
                _impersonation = impersonation;
                if (_impersonation)
                {
                    _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                    _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                    _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                        System.Security.Principal.TokenImpersonationLevel.Impersonation;
                }
            }

            private void SetEndPoint(string p)
            {
                _service.Endpoint.Address = new EndpointAddress(p);
            }

            /// <summary>
            /// Creates a document library with a specific name and template ID
            /// </summary>
            /// <param name="listName">New list name</param>
            /// <param name="templateID">Template ID</param>
            /// <returns></returns>
            public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
            {
                XmlDocument sample = new XmlDocument();
                XmlElement viewCol = sample.CreateElement("Empty");
                try
                {
                    _service.Open();
                    viewCol = _service.AddList(listName, "", templateID);
                }
                catch (Exception ex)
                {
                    exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
                }
                finally
                {
                    _service.Close();
                }
                return viewCol;
            }
        }

    There was a lot more in this class (that I am not including) because I was reusing the grunt work and making other operations with Lists.asmx, for example updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method, so the integration or data layer could recognize whether a library already exists. Well, there is no service call or method available to do that check. So this is what I wrote:

        public bool DocLibExists(string listName, ref ExceptionContract exContract)
        {
            try
            {
                var allLists = _service.GetListCollection();
                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            return false;
        }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little getting used to: instead of working with objects, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through it and find whether a "Title" attribute existed with the value of the list name; if so, return true, if not, false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that the document library already existed.

    Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it; document libraries, image libraries, custom lists, project tasks, etc. all have a unique integer associated with them. Well, that's great, but the client came back to me with some specifics that each "project" (document library) should have. They specified three types of projects. Each project would have unique views, about 10 views per project, and each project specified unique configurations (auditing, versioning, content types, etc.). So what started out as a simple implementation of creating a document library as a repository for a project turned out to be quite involved.

    The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service calls, you could configure views, versioning, even content types; the only catch is that you have to work quite extensively with CAML. I am not fond of CAML. I can work with it; I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying: the service call would return a generic error message that did not point me to a CAML syntax error, or even a CAML error at all; I was not sure whether it was a security, performance or code-based issue. It was also difficult at times because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services.

    So I decided on the template. My plan was to create a document library, configure it accordingly, and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc. It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this was that one of the specifications for this template was that the "All Documents" view was to have two web parts on it. Well, it turns out that the template builder did not include the web parts as part of the list template definition it generated: it backed up the settings, the views and the content types, but not the custom web parts. I still decided to try it, even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem is that the service call accepts an int, but it only has access to the built-in library template IDs; any new ones added or created will not be available to create from. So this made it impossible for me to approach the problem this way.

    I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on those templates, all doable by end-user administrators. These templates are quite unique because they are saved as STP files and not XML definitions. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None.

    After some thinking, I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template, both those for which an ID is required and those saved as STP files. Now, I don't want to bother with posting the code that contacts the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template:

        public ServiceResult CreateProject(string name, string templateName, string projectId)
        {
            string siteurl = SPContext.Current.Site.Url;
            Guid webguid = SPContext.Current.Web.ID;

            using (SPSite site = new SPSite(siteurl))
            {
                using (SPWeb rootweb = site.RootWeb)
                {
                    SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                    ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
                } // SPWeb
            } // SPSite
            return _globalResult;
        }

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
            }
        }

        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
        {
            using (SPSite sitecollection = new SPSite(siteurl))
            {
                using (SPWeb web = sitecollection.AllWebs[webguid])
                {
                    action(web);
                }
            }
        }

    This is some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action-based ProcessWeb method so that I could properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it gave me the flexibility to do just about anything the client wanted at project creation.

    Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same." This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 has more of an answer to this and addresses it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view; I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts, one of them being a project ID configured in the web part properties.

    The following code enhances and replaces the Act_CreateProject method above:

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                SPLimitedWebPartManager wpmgr = null;
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    SPFolder rootFolder = newList.RootFolder;
                    rootFolder.Properties.Add(KEY, projectId);
                    rootFolder.Update();
                    if (rootFolder.ParentWeb != targetsite)
                        rootFolder.ParentWeb.Dispose();
                    if (!templateName.Contains("Natural"))
                    {
                        SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                        SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                        wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                        ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                        alldocfile.Update();
                    }
                    if (newList.ParentWeb != targetsite)
                        newList.ParentWeb.Dispose();
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
                finally
                {
                    if (wpmgr != null)
                    {
                        wpmgr.Web.Dispose();
                        wpmgr.Dispose();
                    }
                }
            }
        }

        private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
        {
            var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
            if (wp != null)
            {
                (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
                mgr.SaveChanges(wp);
            }
        }

    This shows how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, the SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by ID and will not care about names. My DocLibExists method will now need to change, because a web service is not set up to look at property bags; I had to write another method on the service to do the equivalent, but with IDs instead of names. The second thing you will notice about the code is the use of the web part manager. I have seen several examples online, and also read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose of it like I did.

    CONCLUSION

    This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post: I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake, so hopefully it makes enough sense to someone.

  • Speedstep and Intel x5570?

    - by sajal
    Hi,

    My new server has 2 x X5570 CPUs. Here is the output of grep -i hz /proc/cpuinfo (the same model name / cpu MHz pair is repeated for all 16 logical CPUs):

        model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
        cpu MHz    : 1600.231
        model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
        cpu MHz    : 1600.231
        ...

    It always remains the same, no matter how much load MySQL or any other app is hogging. Even when MySQL eats 2 or 3 CPUs at 100% each, the output of cpuinfo is the same. In fact, MySQL performance for some heavy inserts is poorer than on my old E5430 server. Any clues?

    I contacted the server provider; they tried turning off SpeedStep and we still see the same results. Any insights would be helpful, because I am paying heavily for this box and would love to milk all the juice I can.

  • Freeradius authentication failed for unknown reason

    - by Moein7tl
    I followed this instruction to force FreeRADIUS to use a MySQL database, and ran freeradius in debug mode, but it rejects all authentication attempts.

    The MySQL database:

        mysql> select * from radcheck;
        +----+----------+-----------+----+---------+
        | id | username | attribute | op | value   |
        +----+----------+-----------+----+---------+
        |  1 | test     | Password  | == | test123 |
        |  2 | test     | Auth-Type | == | Local   |
        +----+----------+-----------+----+---------+
        2 rows in set (0.02 sec)

    The radtest command:

        # radtest test test123 localhost 0 testing123
        Sending Access-Request of id 235 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=235, length=20

    The radiusd debug-mode log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 51034, id=235, length=74
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0xbf111cbbae24fb0f0a558bfa26f53476
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        ++[mschap] returns noop
        ++[digest] returns noop
        [suffix] No '@' in User-Name = "test", looking up realm NULL
        [suffix] No such realm "NULL"
        ++[suffix] returns noop
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        ++[files] returns noop
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] WARNING! No "known good" password found for the user. Authentication may fail because of this.
        ++[pap] returns noop
        ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
        Failed to authenticate the user.
        Using Post-Auth-Type Reject
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject] expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 20 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 20
        Sending Access-Reject of id 235 to 127.0.0.1 port 51034
        Waking up in 4.9 seconds.
        Cleaning up request 20 ID 235 with timestamp +4325
        Ready to process requests.

    Where is the problem and how should I solve it?
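
    For what it's worth, the debug line 'No "known good" password found' is the usual symptom of a well-known configuration slip: recent FreeRADIUS versions look for a Cleartext-Password check item with the := operator, while the radcheck rows above use the legacy Password attribute with ==. A sketch of the fix against the table shown:

        UPDATE radcheck
           SET attribute = 'Cleartext-Password', op = ':='
         WHERE username = 'test' AND attribute = 'Password';

        -- The explicit Auth-Type override is usually unnecessary and can be dropped:
        DELETE FROM radcheck
         WHERE username = 'test' AND attribute = 'Auth-Type';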

  • How can a CentOS 6 guest running in VirtualBox be configured as a LAMP server that can be accessed from the Windows host?

    - by jtt89
    I was able to connect CentOS 6 on VirtualBox to Windows (I can ping in both directions) with a host-only adapter (for the connection between the two) and a NAT adapter (so Linux on VirtualBox can reach the Internet). I want to set up httpd, MySQL and vsftpd servers, and in the end easily connect to httpd from a Windows-based browser and to the FTP server with a Windows-based client as well. I would also like access through SSH.

    I have a general idea of the steps that are involved, but there is also some configuration that I am not sure about at this point. Let's say I follow these steps:

        yum install httpd
        yum install php php-pear php-mysql
        yum install mysql-server
        mysql_secure_installation
        yum install vsftpd
        yum install mod_ssl

    Technically I have everything installed, but what would be the next steps that I need to take (from the networking point of view, so to speak) to get it all working? I know I need to configure at least Apache and the FTP server, but I am not sure how it is going to work: where am I going to be uploading the sites (I know this can vary), how am I going to know what address to use in a browser if I want to go to website x, y or z on that installation, etc. This sounds like I need to do some kind of DNS setup, and I am kind of stuck at this point.

    If somebody could give me a general outline of the things that need to be done, that would be great. (I have been looking at a lot of websites, and I know about /etc/sysconfig/network, httpd.conf (not too much about it on Apache's site), hostname, hostname -f, etc., but it is kind of hard to piece it all together at this point.) I am also going to be looking at books, but they do not always reflect the setup that I have (VirtualBox). Thank you.
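
    For the MySQL part of the checklist, at least, the usual step after mysql_secure_installation is to create a per-site database and user; a sketch with placeholder names (website_x, webuser and the password are examples, not recommendations):

        CREATE DATABASE website_x;
        CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'change-me';
        GRANT ALL PRIVILEGES ON website_x.* TO 'webuser'@'localhost';
        FLUSH PRIVILEGES;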

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g.:

        /chroot/website1
        /chroot/website2

    I'm using makejail, which is a high-level tool, for creating the jails and copying the libraries and dependencies. Easy peasy. Each website will need nginx, PHP and MySQL. For PHP I'm using php5-fpm, which actually supports chroot by configuration; however, I'm not using that (maybe I should?).

    My question is which of the following approaches is better:

    1) Every website has its own separate instance of nginx, PHP and MySQL. The downside is that each web server + PHP has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most advanced.

    2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chrooted folder. This is quite manageable; however, only PHP will be chrooted, not the actual web server. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.

    3) You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, I find the market for chroot guides very scarce. Any help or input much appreciated!

  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just received a request to add calendar, address book and task functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders; etc. On the client side, we have some people using webmail, some using MS Outlook, and some others using Mozilla Thunderbird.

    Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this. I read through SOGo's official install guide and also checked here and here. However, I see most of the HowTos ask for installation of MySQL/PgSQL, LDAP, Samba, etc. While I can manage the installation of Samba (if required), I have no idea whether installing LDAP, MySQL, etc. is really required. Also, any guidance as to how to install this on a regular mail server would be appreciated.

    Sorry if this sounds vague. If any more information is required, I'll be happy to give it. Thanks in advance.

    Edit: The server in question has always been governed via cPanel (to install PHP, MySQL, configure DNS, etc.), so I am confused about whether I really need LDAP.

  • Configuring apache and php to handle many connections

    - by Marc
    My preliminary setup is like this:

      - two quad-core 8GB servers running Debian 6, with PHP and Apache
      - one quad-core 16GB server running Debian 6, with MySQL

    My plan is to have one 8GB server act as a proxy server, using Vert.x (Java) to handle connections. I will let Vert.x use HttpClient to send web requests to the second 8GB server. That server will have Apache installed and use PHP to deliver any information it gets from the MySQL server on the third, 16GB machine.

    The main reason I want this setup is to keep things separated, so the proxy will be the only way to access the system; the other two servers will only be reachable from the local network.

    I can have the Vert.x proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server seems not to have any issues on this part. In previous projects, the Apache part always caused issues, no matter how I set it up. I might not fully understand how to set up Apache, since normally Apache should handle many concurrent connections, but it does not seem to now.
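
    One concrete thing worth verifying on the MySQL box before tuning Apache: a persistent pool of 500-800 connections only works if the server actually allows that many. A sketch of the check (the 1000 figure is an arbitrary example, not a recommendation):

        SHOW VARIABLES LIKE 'max_connections';
        SHOW STATUS LIKE 'Max_used_connections';
        SET GLOBAL max_connections = 1000;  -- or set it in my.cnf so it survives restarts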

  • Postfix connect timing out remotely, working fine locally

    - by Moritz
    Running Postfix on Debian, I cannot connect to send mail any more. It worked until approximately a week ago. I do not recall touching the configuration of the server during that time, which makes it difficult for me to find out what the problem is.

    When connecting from the server to itself it works fine:

        root@xxxx:~# telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.localdomain.
        Escape character is '^]'.
        ehlo localhost
        220 mail.xxxx.de ESMTP Postfix (Debian/GNU)
        250-mail.xxxx.de
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

    Trying to do the same remotely times out:

        laptop:~ $ telnet mail.xxxx.de 25
        Trying 93.xx.xx.xx...
        telnet: connect to address 93.xx.xx.xx: Operation timed out
        telnet: Unable to connect to remote host

    Configuration is as follows:

        root@xxxx:~# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = localhost.localdomain, localhost.localdomain, localhost
        myhostname = mail.xxxx.de
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_exceptions_networks = $mynetworks
        smtpd_sasl_local_domain =
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        virtual_alias_maps = proxy:mysql:$config_directory/mysql_virtual_alias_maps.cf
        virtual_gid_maps = static:8
        virtual_mailbox_base = /var/vmail
        virtual_mailbox_domains = proxy:mysql:$config_directory/mysql_virtual_domains_maps.cf
        virtual_mailbox_maps = proxy:mysql:$config_directory/mysql_virtual_mailbox_maps.cf
        virtual_minimum_uid = 150
        virtual_transport = dovecot

    Receiving mail is no problem, as is retrieving it remotely. Do you have an idea what I could check next?

  • Unable to Access Localhost after starting Xampp

    - by user7370
    OS: Windows XP Professional, SP2.

    A few days back I had xampplite 1.7.1 installed and was able to access localhost and phpMyAdmin through the browser. Today it suddenly stopped working: in Firefox, after I type http://localhost/, nothing happens, just a blank white screen. I removed all the files in the xampplite folders and re-installed version 1.7.1 again; it's of no use. Then I installed xampplite 1.7.2 (the latest), which I had downloaded from the XAMPP website; again it's of no use. Apache and MySQL are running, though.

    I am trying to install WordPress locally, as I have a theme ready and want to convert that design to WordPress, test it and start using it online.

    Running 'Port-Check' on the XAMPP control panel showed this:

        -- RESULT ------
        Service           -- Port  -- Status
        ---------------------------------------------------
        Apache (HTTP)     -- 80    -- C:\xampplite\apache\bin\httpd.exe
        Apache (WebDAV)   -- 81    -- free
        Apache (HTTPS)    -- 443   -- C:\xampplite\apache\bin\httpd.exe
        MySQL             -- 3306  -- C:\xampplite\mysql\bin\mysqld.exe
        FileZilla (FTP)   -- 21    -- free
        FileZilla (Admin) -- 14147 -- free
        Mercury (SMTP)    -- 25    -- free
        Mercury (POP3)    -- 110   -- free
        Mercury (IMAP)    -- 143   -- free
        Mercury (HTTP)    -- 2224  -- free
        Mercury (Finger)  -- 79    -- free
        Mercury (PH)      -- 105   -- free
        Mercury (PopPass) -- 106   -- free
        Tomcat (AJP/1.3)  -- 8009  -- free
        Tomcat (HTTP)     -- 8080  -- free

    I also have Skype installed, but it's not using port 80 (as I have read this was a common issue; under Skype's options the port is 65013). And when I run file:///C:/xampp/htdocs/index.php, it shows "Something is wrong with the XAMPP installation :-(".

    Please help with this problem. Thanks, Sharath Kumar

  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is:

        You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance.

    The last part of that quote, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", I interpret to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance, but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?"; the answer there quoted an older version of the FAQ.

    What I am looking for is an Amazon document that clarifies this question, or else someone with experience using Amazon RDS who can attest to what the situation actually is.

  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive load balancer or web server nodes going offline without any interruption of service to visitors.

    Currently, I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5-dev12 with OpenSSL support compiled in. When I try to navigate to the virtual IP for HAProxy over https, I get the message:

        The plain HTTP request was sent to HTTPS port.

    Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log     global
            option  dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.

  • How do I import large sql file to local LAMP (xampp) environment

    - by mraslton
    I have used Linux to import a large MySQL dump file (into a new database), but I am new to how the process works in a local LAMP environment using XAMPP, as XAMPP does not support SSH.

    I've downloaded large_dump_file.sql from the Linux server to my local system. I'm using Windows XP and have used XAMPP to set up LAMP. I am able to access local_database via phpMyAdmin, but the dump file is too large to import using that app. I'm trying to import the file via the command prompt, but so far with no success. At the prompt:

        cd ..
        cd ..
        cd xampp
        cd mysql
        cd bin

    I've found that mysqlimport is used to import .csv and .txt files, and mysql is used to import .sql files, but I can't find documentation as to whether or not to use the -u and -p options, so I've tried many variations of the command with no luck. What would be the proper command?

    I've modified the hosts, virtual-hosts conf, and Apache config files. Do I need to change any other config files on my local system? Thanks.
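
    On the -u/-p question: they work the same way on Windows as on Linux, e.g. mysql -u root -p local_database < large_dump_file.sql from within C:\xampplite\mysql\bin. An alternative that sidesteps shell redirection entirely is the mysql client's SOURCE command (the path below is an example; note the forward slashes):

        -- from the mysql prompt (started with: mysql -u root -p)
        USE local_database;
        SOURCE C:/path/to/large_dump_file.sql;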

  • MediaWiki migrated from Tiger to Snow Leopard throwing an exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6 and installed MacPorts. I installed the same web development apps: Apache 2, PHP 5.3.2, MySQL 5, etc., all the same versions as on my old laptop.

    A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, I get the following exception thrown:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems).

    Has anyone else gotten MediaWiki 1.15 working on OS X 10.6 with MacPorts? Is there anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?

  • PHP-FPM issue on LEMP Stack and WordPress

    - by jw60660
    I'm very much an NGINX and server-admin beginner. I used this tutorial to install NGINX / PHP / MySQL / WordPress: C3M Digital Tutorial. In that tutorial the backend php-cgi setup is configured using FastCGI, and php5-fpm was installed along the way:

        apt-get install nginx-full php5-fpm php5 php5-mysql php5-apc php5-mysql php5-xsl php5-xmlrpc php5-sqlite php5-snmp php5-curl

    After reading that the NGINX configuration in the WordPress Codex was more secure than most tutorials, I decided to use the Codex configuration: WordPress NGINX configuration in Codex. The Codex configuration uses php-fpm for backend php-cgi. When opening the browser I got a 502 Bad Gateway error. The error log was:

        2012/06/10 21:18:27 [crit] 14009#0: *4 connect() to unix:/tmp/php-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 12.3.456.789, server: mywebsite.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "mywebsite.com"

    In the main NGINX configuration file supplied by the Codex, I noticed the line starting with "server unix:" in the upstream php block, which points at the empty location:

        # Upstream to abstract backend connection(s) for PHP.
        upstream php {
            server unix:/tmp/php-fpm.sock;
            # server 127.0.0.1:9000;
        }

    I checked the folder at /tmp and it was empty. It seems I missed configuring php-fpm to play with NGINX. Can someone point me in the right direction? Much appreciated!

  • Balancing internal services using a Cisco CSS 11501

    - by Ladadadada
    First, the background to the problem: I have a Cisco CSS 11501 that I am using to load balance a few web servers. These web servers have two network interfaces, one internal and one external, and we are sending the requests to the internal interface.

    We have the CSS configured to do NAT because our web servers need to see the client's IP address. Because the TCP packets hit the web servers with a source address on the Internet, each web server tries to send its reply packets back to the client over the external interface and not through the load balancer. In order to stop these replies from being sent back out to the Internet via the external interface, we added a routing rule on these boxes so that all traffic with a source address on the Internet uses the load balancer as its gateway. This part works fine.

    What I would also like to do is use the CSS as a load balancer for internal services such as our MySQL slaves. When I do this, I run into a similar problem: the TCP connection goes from the web server to the load balancer and then from the load balancer to the MySQL slave, but the CSS spoofs a source address of the original web server. The MySQL slave then tries to send the response directly to the web server via the internal network and not via the load balancer.

    The ideal solution would be to tell the CSS not to do source-address spoofing on the internal network and only do it for requests originating on the Internet. Is this possible? Failing that, is there a way of directing the load-balanced traffic back through the load balancer while keeping the other traffic (say SSH) purely on the internal network? Is there another way of using the CSS 11501 to load balance internal services?
