Search Results

Search found 31417 results on 1257 pages for 'site structure'.

Page 17/1257 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Easy way to observe user activity - how to improve my database structure.

    - by Thomas
    Welcome, I need some advice on how to improve the performance of my web application. In the beginning I had this database structure: USER -id (Primary Key) -name -password -email .... PROFILE -user Primary Key, Foreign Key (USER) -birthday -region -photoFile ... PAGES -id (Primary Key) -user Foreign Key(USER) -page -date COMMENTS -id (Primary Key) -user Foreign Key(USER) -page Foreign Key(PAGE) -comment -date FAVOURITES_PAGES -id (Primary Key) -user Foreign Key(USER) -favourite_page Foreign Key(PAGE) -date. But now one of the most important pages of the website is the observatory, where everyone can observe the activity of other users. So I need to select all pages, comments and favourite pages of some users and display them in one list, sorted by date. For better performance (I think) I changed my structure to this: tables USER and PROFILE without changes; ACTIVITY (an additional table holding the common fields user and date) -id (Primary Key) -user Foreign Key(USER) -date -page Foreign Key(PAGE) -comment Foreign Key(COMMENTS) -favourite_page Foreign Key(FAVOURITES_PAGES); PAGES -id (Primary Key) -page; COMMENTS -id (Primary Key) -page Foreign Key(PAGE) -comment; FAVOURITES_PAGES -id (Primary Key) -favourite_page Foreign Key(PAGE). So now it is very easy to get sorted records from all tables. But the ACTIVITY table does not contain only the foreign keys to PAGES, COMMENTS and FAVOURITES_PAGES - there are about ten foreign key fields, and in each record only one has a value while the others are None: ACTIVITY id user date page comment ... 1 2 2010-02-23 None 1 | 2 1 2010-02-21 1 None .... Is this a correct solution? When I display about 40 records on one page (pagination) I must wait about one second, even though the database is almost empty (a few users and about 100 records in the other tables). It depends on the number of records per page - I have checked that - but why does it take so long? Is it because of the relationships? The website is built in Python/Django. Any advice/opinions?
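    One way to get a single, date-sorted activity feed without a row of mostly-NULL foreign keys is a generic relation from Django's contenttypes framework. The sketch below is purely illustrative (the model and field names are assumptions, and the import paths assume a reasonably recent Django version), not the poster's actual schema:

    ```python
    # Illustrative sketch: one Activity row per event, with a generic foreign key
    # to whichever object (page, comment, favourite) produced it, instead of ten
    # nullable FK columns.
    from django.conf import settings
    from django.contrib.contenttypes.fields import GenericForeignKey
    from django.contrib.contenttypes.models import ContentType
    from django.db import models


    class Activity(models.Model):
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        date = models.DateTimeField(auto_now_add=True, db_index=True)

        # Exactly one target object per row; no unused NULL columns.
        content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
        object_id = models.PositiveIntegerField()
        target = GenericForeignKey("content_type", "object_id")

        class Meta:
            ordering = ["-date"]


    # Paginated observatory feed for a set of observed users (hypothetical queryset):
    # Activity.objects.filter(user__in=observed_users).select_related("user")[:40]
    ```

    Whichever schema is used, a one-second render for 40 rows on a nearly empty database usually points to per-row queries (the N+1 pattern) or missing select_related/prefetch_related calls rather than to the relationships themselves, so that is worth checking first.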

    Read the article

  • "do it all" page structure and things to watch out for?

    - by Andrew Heath
    I'm still getting my feet wet in PHP (my 1st language) and I've reached the competency level where I can code one page that handles all sorts of different related requests. They generally have a structure like this: (pseudocode) <?php include 'include/functions.php'; IF authorized IF submit (add data) ELSE IF update (update data) ELSE IF list (show special data) ELSE IF tab switch (show new area) ELSE display vanilla (show default) ELSE "must be registered/logged-in" ?> <HTML> // snip <?php echo $output; ?> // snip </HTML> and it all works nicely, and quite quickly, which is cool. But I'm still sorta feeling my way in the dark... and would like some input from the pros regarding this type of page design... Is it a good long-term structure? (it seems easily expanded...) Are there security risks particular to this design? Are there corners I should avoid painting myself into? Just curious about what lies ahead, really...
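    As a language-agnostic illustration of where this structure usually ends up (sketched in Python purely for brevity; every handler name here is made up), the if/else chain is often replaced with a whitelisted dispatch table, so that only explicitly registered actions can ever run and adding a new action is a one-line change:

    ```python
    # Dispatch-table version of the "one page, many actions" pattern (sketch).
    def add_data(request): return "added"
    def update_data(request): return "updated"
    def list_data(request): return "listing"
    def show_default(request): return "default view"

    ACTIONS = {
        "submit": add_data,
        "update": update_data,
        "list": list_data,
    }

    def handle(request):
        if not request.get("authorized"):
            return "must be registered/logged-in"
        # Whitelist lookup: unknown or missing actions fall back to the default view.
        handler = ACTIONS.get(request.get("action"), show_default)
        return handler(request)

    print(handle({"authorized": True, "action": "submit"}))   # added
    print(handle({"authorized": False, "action": "list"}))    # must be registered/logged-in
    ```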

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is through the use of User Roles; these Roles allow specific access to be granted based on what role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD principle. CRUD Principle: Create New Data, Read Existing Data, Update Existing Data, Delete Existing Data. Based on the actions assigned to a role, users can manipulate data as they need to perform daily business operations. An example of this can be seen in a hospital where doctors have been assigned Create, Read, Update, and Delete access to their patients’ prescriptions, so that a doctor can prescribe and adjust any existing prescriptions as necessary. However, a nurse will only have Read access on a patient’s prescriptions, so that they will know what medicines to give to the patient. Notice that nurses do not have access to prescribe new prescriptions, or to update or delete existing prescriptions, because only the patient’s doctor has access to perform those actions. With User Roles comes responsibility: companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels and that users are not exposed to inappropriate data. In addition, this also prevents rogue employees from gaining access to critical business information that could be destroyed, altered or stolen. It is important that all data access is monitored because of this threat. Security Governance Current data governance laws regarding security include the Health Insurance Portability and Accountability Act (HIPAA), the Sarbanes-Oxley Act, and the Database Breach Notification Act. The US Department of Health and Human Services defines HIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronically protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety. In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; rather, it outlines which pieces of data are to be stored as well as the storage duration. The Database Breach Notification Act requires companies, in the event of a data breach involving personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation. However, it does affect “any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information,” regardless of where the compromised data is located. Any business that maintains even limited interactions with California residents will find itself subject to the Act’s provisions. 
Security Policies All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas it serves. These types of policies need to be mandated by a company’s Security Officer. For smaller companies, these policies need to come from executives, directors, and owners.

    Read the article

  • PHP XPath - how to get value from result

    - by LeeTee
    I have the following XML file from eBay <GeteBayDetailsResponse xmlns="urn:ebay:apis:eBLBaseComponents"><Timestamp>2012-07-04T12:02:14.541Z</Timestamp><Ack>Success</Ack><Version>779</Version><Build>E779_INTL_BUNDLED_14986004_R1</Build><SiteDetails><Site>US</Site><SiteID>0</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Canada</Site><SiteID>2</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>UK</Site><SiteID>3</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Germany</Site><SiteID>77</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Australia</Site><SiteID>15</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>France</Site><SiteID>71</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>eBayMotors</Site><SiteID>100</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Italy</Site><SiteID>101</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Netherlands</Site><SiteID>146</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Spain</Site><SiteID>186</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>India</Site><SiteID>203</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>HongKong</Site><SiteID>201</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Singapore</Site><SiteID>216</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Malaysia</Site><SiteID>207</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Philippines</Site><SiteID>211</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>CanadaFrench</Site><SiteID>210</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Poland</Site><SiteID>212</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Belgium_Dutch</Site><SiteID>123</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Belgium_French</Site><SiteID>23</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Austria</Site><SiteID>16</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Switzerland</Site><SiteID>193</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><SiteDetails><Site>Ireland</Site><SiteID>205</SiteID><DetailVersion>1</DetailVersion><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></SiteDetails><UpdateTime>2009-07-09T10:48:17.000Z</UpdateTime></GeteBayDetailsResponse> I need to get the value of node SiteID where the node Site matches whatever variable I pass. The code I have is: $xml_file ='SiteDetails.xml'; $xmlDoc = new DomDocument(); $xmlDoc->load($xml_file); $xpath = new DOMXpath($xmlDoc); $siteIDList = $xpath->query("/GeteBayDetailsResponse/SiteDetails[Site=$site]/SiteID"); var_dump($siteIDList); echo $siteIDList->SiteID; I get the following result: object(DOMNodeList)#11 (0) { } Can anyone help? I want to get a value of 3.
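    One thing worth checking when a query like this returns an empty DOMNodeList: the document declares a default namespace (urn:ebay:apis:eBLBaseComponents), and in XPath 1.0 unprefixed element names only match elements in no namespace, so the steps need a registered prefix. For comparison, here is a minimal sketch of the same lookup in Python with lxml (the file name and the hard-coded site value are illustrative):

    ```python
    # Sketch: namespace-aware XPath over the eBay SiteDetails document.
    from lxml import etree

    NS = {"e": "urn:ebay:apis:eBLBaseComponents"}

    tree = etree.parse("SiteDetails.xml")
    site_ids = tree.xpath(
        "/e:GeteBayDetailsResponse/e:SiteDetails[e:Site='UK']/e:SiteID/text()",
        namespaces=NS,
    )
    print(site_ids)  # expected: ['3'] for the UK entry in the sample document
    ```

    The PHP equivalent is to call registerNamespace() on the DOMXPath object, prefix each step in the query accordingly, and quote the $site value inside the predicate so it is treated as a string rather than a node test.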

    Read the article

  • VB .NET Passing a Structure containing an array of String and an array of Integer into a C++ DLL

    - by DanJunior
    Hi everyone, I'm having problems with marshalling in VB .NET to C++, here's the code : In the C++ DLL : struct APP_PARAM { int numData; LPCSTR *text; int *values; }; int App::StartApp(APP_PARAM params) { for (int i = 0; i < numLines; i++) { OutputDebugString(params.text[i]); } } In VB .NET : <StructLayoutAttribute(LayoutKind.Sequential)> _ Public Structure APP_PARAM Public numData As Integer Public text As System.IntPtr Public values As System.IntPtr End Structure Declare Function StartApp Lib "AppSupport.dll" (ByVal params As APP_PARAM) As Integer Sub Main() Dim params As APP_PARAM params.numData = 3 Dim text As String() = {"A", "B", "C"} Dim textHandle As GCHandle = GCHandle.Alloc(text) params.text = GCHandle.ToIntPtr(textHandle) Dim values As Integer() = {10, 20, 30} Dim valuesHandle As GCHandle = GCHandle.Alloc(values) params.values = GCHandle.ToIntPtr(heightHandle) StartApp(params) textHandle.Free() valuesHandle.Free() End Sub I checked the C++ side, the output from the OutputDebugString is garbage, the text array contains random characters. What is the correct way to do this?? Thanks a lot...

    Read the article

  • 10 steps to enable ‘Anonymous Access’ for your SharePoint 2010 site

    - by KunaalKapoor
    What’s Anonymous Access? Anonymous access to your SharePoint site enables all visitors to view your SharePoint site anonymously without having to log in. With this blog I’d like to go through an easy step-wise procedure to enable/set up anonymous access. Before you actually enable anonymous access on the site, you’ll have to change some settings at the web app level. So let’s start with that. Prerequisite(s): 1. A hosted SharePoint 2010 farm/server. 2. An existing SharePoint site. I just thought I’d mention the above pre-reqs, since the steps mentioned below wouldn’t be valid for a different type of site. Step 1: In Central Administration, under Application Management, click on Manage web applications. Step 2: Now select the site you want to enable anonymous access for and click on the Authentication Providers icon. Step 3: On the modal window click on the Default zone. Step 4: Now under the Edit Authentication section, check Enable anonymous access and click Save. This is basically to make the Anonymous Access authentication mechanism available at the web app level in IIS. Now the web application will allow anonymous access to be set. Step 5: Going back to Web Application Management, click on the Anonymous Policy icon. Step 6: Before we proceed any further, under the Anonymous Access Restrictions (at web app management) select your Zone, set the Permissions to None – No policy and click Save. Step 7: Now let’s navigate to your top-level site collection for the web application. Click Site Actions > Site Settings. Step 8: Under Users and Permissions, click on Site Permissions. Step 9: Under the Edit tab, click on Anonymous Access. Step 10: Choose whether you want anonymous users to have access to the entire Web site or to lists and libraries only, and then click OK. You should now be able to see the view as below under your permissions. Also keep in mind: if you are trying to access the site from a browser within the domain, then you’ll need to change some browser settings to see the effect. Normally this is because the browser (Internet Explorer) is set to log in automatically in the intranet zone only; this applies unless you have explicitly changed the zones and added the site to trusted sites. If this is from a box within your domain, please try to access the site by temporarily changing the Internet Explorer setting to Anonymous Logon on the zone that the site is added to (for example "Intranet") and try again. You will find these settings by clicking on Tools > Internet Options > Security Tab.

    Read the article

  • Avoid SEO loss after URL structure change

    - by Eric Nguyen
    We recently re-wrote our site, moving from Umbraco to WordPress. This was done by third-party developers. I have been the project manager, and it is my mistake that I hadn't noticed until now that the URLs changed in a way that affects SEO. The new site was launched last Thursday. The old URL for a "place" (a WordPress custom post type, in case you're a WordPress expert and want/need to point me to another discussion on WP Stackexchange) page is as follows: ourdomain.com/singapore/central/alexandra/an-interesting-place Now it has been changed to ourdomain.com/places/an-interesting-place I have already requested the third-party developers to work on rewriting the URLs to emulate the old URL structure. However, it's taking quite a lot of time (we have multiple custom post types, e.g. events etc., so it might be complicated; the developers seemed quite confused when I first mentioned rewriting URLs for the custom post types). In the meantime, I wonder if there is a quicker workaround for this: 1) Use .htaccess to rewrite ourdomain.com/singapore/central/alexandra/an-interesting-place to ourdomain.com/places/an-interesting-place. This should avoid most (roughly 90%) of the search traffic loss. I suppose I can learn how to do this quite quickly, but no harm mentioning it here. 2) Use rel="canonical" to indicate that ourdomain.com/places/an-interesting-place is the exact duplicate of ourdomain.com/singapore/central/alexandra/an-interesting-place. I will definitely go for both approaches (and I'm also changing the 404 page to cater for this temporary issue), but I wonder if 2) is even feasible and if I have missed anything. Is there anything else you could recommend in this situation? Let me know if my question is not clear anywhere. Clarifications: The old website is on a Windows Server EC2 instance completely separate from the Linux EC2 instance on which the new site is running. In addition, the same domain "ourdomain.com" is used here (an A record is used to point to an EC2 Elastic IP). Therefore, the old server is completely inaccessible at the moment, unless we use the IP address of the old server (which doesn't help me at all in this case). Even if the old server were accessible, I can't see where one could put the .htaccess or an HTML file to do a 301 redirect here. Unless I'm successful with approach 1) or the developers can rewrite the URLs in code, a 404 page is really my only fallback.

    Read the article

  • SEO URL structure for tag search on site

    - by Theo G
    I am looking to add tags to each product on my site, e.g. brown, x products under £x, second hand x, refurbished x, etc. Once you click these tags it will then search for other tags that are similar. I was thinking of using a URL structure of www.site.com/tags/this%is%the%tag%name and then simply having a page that shows the results of all the products with that tag. I heard a while back that Google generally ignores or downgrades anything with ‘search’ in the URL and was wondering if anyone had any experience with this? Also, would you say /tags/ is a pretty valid destination, or is it best to break it down and add more levels, e.g. /product-type/product%variation? Thanks in advance!

    Read the article

  • Directory structure for a website (js/css/img folders)

    - by nightcoder
    For years I've been using the following directory structure for my websites: <root> ->js ->jquery.js ->tooltip.js ->someplugin.js ->css ->styles.css ->someplugin.css ->images -> all website images... It seemed perfectly fine to me until I began to use different 3rd-party components. For example, today I downloaded a datetime picker JavaScript component that looks for its images in the same directory where its css file is located (the css file contains urls like "url('calendar.png')"). So now I have 3 options: 1) Put datepicker.css into my css directory and put its images alongside it. I don't really like this option because I will have both css and image files inside the css directory, which is weird. Also I might get files from different components with the same name, such as two different components that both link to background.png from their css files. I would have to fix those name collisions (by renaming one of the files and editing the corresponding file that contains the link). 2) Put datepicker.css into my css directory, put its images into the images directory and edit datepicker.css to look for the images in the images directory. This option is OK, but I have to spend some time editing 3rd-party components to fit them to my site structure. Again, name collisions may occur here (as described in the previous option) and I will have to fix them. 3) Put datepicker.js, datepicker.css and its images into a separate directory, let's say /3rdParty/datepicker/, and place the files as the author intended (i.e., for example, /3rdParty/datepicker/css/datepicker.css, /3rdParty/datepicker/css/something.png, etc.). Now I begin to think that this option is the most correct. Experienced web developers, what do you recommend?

    Read the article

  • SQL SERVER – Get Directory Structure using Extended Stored Procedure xp_dirtree

    - by pinaldave
    Many years ago I wrote the article SQL SERVER – Get a List of Fixed Hard Drive and Free Space on Server where I demonstrated using an undocumented stored procedure to find the drive letters on the local system and the available free space. I received a question in email from a reader asking if there is any way he can list a directory structure within T-SQL. When I inquired further, he explained that he needs this because he wants to set up a backup of the data in a certain structure. Well, there is one undocumented stored procedure which can do exactly that. However, please be wary of using any undocumented procedures. xp_dirtree 'C:\Windows' Execution of the above stored procedure will give the following result. If you prefer, you can insert the data into a temp table for further use. Here is a quick script which will insert the data into a temp table and retrieve it from there. CREATE TABLE #TempTable (Subdirectory VARCHAR(512), Depth INT); INSERT INTO #TempTable (Subdirectory, Depth) EXEC xp_dirtree 'C:\Windows' SELECT Subdirectory, Depth FROM #TempTable; DROP TABLE #TempTable; Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Sitemap structure for network of subdomains

    - by HaCos
    I am working on a project that's a network of 2 domains (domain.gr & domain.com.cy) of subdomains, similar to Hubpages [each user gets a profile under a different subdomain & there is a personal blog for that user as well], and I think there is something wrong with our sitemap submission. Sometimes it takes weeks for new profiles to get indexed. We use one Webmasters account to manage the whole network, and we don't want to create different accounts for each subdomain since there are more than 1000 already. According to this post http://goo.gl/KjMCjN, I ended up with a structure of 5 sitemaps: 1st sitemap will be for indexing the others. 2nd sitemap for all user profiles under domain.gr. 3rd sitemap for all user profiles under domain.com.cy. 4th sitemap for all posts under *.domain.gr - news sitemap http://goo.gl/8A8S9f. 5th sitemap for all posts under *.domain.com.cy - news sitemap again. Now my questions: Should we create news sitemaps or just list all posts in the 2nd & 3rd sitemaps? Does link ordering matter? E.g. should the most recently created user be first in the sitemap, or does it make no difference as long as the lastmod date is correct? Does anyone know how Hubpages submit their sitemap in Webmaster Tools, so maybe we could follow their approach? Any alternative or better way to index this kind of schema? PS: Our network is multi-language - Greek & English are available. We use hreflang tags in the head of each page to separate the country targeting of each version.
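    As a rough illustration of the "1st sitemap for indexing the others" idea, here is a minimal Python sketch that writes a sitemap index in the sitemaps.org format; the child sitemap URLs and the output file name are purely illustrative:

    ```python
    # Sketch: generate the top-level sitemap index pointing at the child sitemaps.
    from datetime import date
    from xml.etree import ElementTree as ET

    CHILD_SITEMAPS = [
        "http://domain.gr/sitemap-profiles.xml",
        "http://domain.com.cy/sitemap-profiles.xml",
        "http://domain.gr/sitemap-posts.xml",
        "http://domain.com.cy/sitemap-posts.xml",
    ]

    root = ET.Element("sitemapindex", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in CHILD_SITEMAPS:
        sm = ET.SubElement(root, "sitemap")
        ET.SubElement(sm, "loc").text = url
        ET.SubElement(sm, "lastmod").text = date.today().isoformat()

    ET.ElementTree(root).write("sitemap-index.xml", encoding="utf-8", xml_declaration=True)
    ```

    Each child sitemap is then an ordinary urlset file listing the profile or post URLs, and the index file can be the single sitemap submitted in Webmaster Tools.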

    Read the article

  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C# (based largely on Dan Vanderboom's generic implementation). I am now considering my approach to handling a Count property, which Dan does not implement. The obvious and easy way would be to use a recursive call which traverses the tree, happily adding up nodes (or iteratively traversing the tree with a queue and counting nodes, if you prefer). It just seems expensive. (I also may want to lazy load some of my nodes down the road.) I could maintain a count at the root node. All children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This would push the iteration problem to whenever I want to break off a branch or clear all children below a given node. Generally less expensive, and it puts the heavy lifting in what I think will be less frequently called functions. It seems a little brute force, and that usually means exception cases I haven't thought of yet, or bugs if you prefer. Does anyone have an example of an implementation which keeps a count for an unbalanced and/or non-binary tree structure rather than counting on the fly? Don't worry about the lazy load, or the language. I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
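    Since the question explicitly asks for an example, here is a minimal sketch (in Python; class and member names are illustrative, and lazy loading is ignored) of the cached-count approach for a non-binary tree: every node stores the size of its own subtree, and add/detach walk the parent chain once to apply the delta, so reading Count is O(1) and a detached branch keeps a correct count of its own:

    ```python
    # Sketch: subtree sizes are cached per node; mutations cost O(depth).
    class TreeNode:
        def __init__(self, value):
            self.value = value
            self.parent = None
            self.children = []
            self._subtree_size = 1  # this node itself

        @property
        def count(self):
            """Number of nodes in the subtree rooted here, including this node."""
            return self._subtree_size

        def add(self, child):
            child.parent = self
            self.children.append(child)
            self._propagate(child._subtree_size)
            return child

        def detach(self, child):
            """Break off a whole branch; the branch keeps its own correct count."""
            self.children.remove(child)
            child.parent = None
            self._propagate(-child._subtree_size)
            return child

        def _propagate(self, delta):
            node = self
            while node is not None:
                node._subtree_size += delta
                node = node.parent


    root = TreeNode("root")
    a = root.add(TreeNode("a"))
    a.add(TreeNode("a1"))
    root.add(TreeNode("b"))
    assert root.count == 4
    root.detach(a)
    assert root.count == 2 and a.count == 2
    ```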

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation with a square 2D world. Agents live in this world and make decisions like moving or replicating themselves based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times: sequential, using OpenMP, and using MPI. So if I can use the same structure, that will be quite good. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It would be overkill for both the sequential and parallel versions to check each cell and look for possible agents. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
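    One common middle ground, sketched below in Python with illustrative names, is to keep the agents in a hash map keyed by (x, y) and iterate over the agents rather than over cells; neighbor queries are then direct dictionary lookups, and the same layout partitions naturally by agent (or by region) for the OpenMP and MPI versions:

    ```python
    # Sketch: sparse world - only occupied cells are stored, so one simulation
    # step costs O(number_of_agents) instead of O(width * height).
    class World:
        def __init__(self, width, height):
            self.width, self.height = width, height
            self.agents = {}  # (x, y) -> agent object

        def add(self, agent, x, y):
            self.agents[(x, y)] = agent

        def neighbors(self, x, y):
            """Yield agents in the 8 surrounding cells (with toroidal wrap-around)."""
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue
                    cell = ((x + dx) % self.width, (y + dy) % self.height)
                    if cell in self.agents:
                        yield self.agents[cell]

        def step(self):
            # Iterate over a snapshot so moves/replication during the step are safe.
            for (x, y), agent in list(self.agents.items()):
                neighborhood = list(self.neighbors(x, y))
                # agent.decide(neighborhood, extra_params) would update self.agents here
    ```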

    Read the article

  • How to Structure a Trinary state in DB and Application

    - by ABMagil
    How should I structure, in the DB especially, but also in the application, a trinary state? For instance, I have user feedback records which need to be reviewed before they are presented to the general public. This means a feedback reviewer must see the unreviewed feedback, then approve or reject it. I can think of a couple of ways to represent this: Two boolean flags: Seen/Unseen and Approved/Rejected. This is the simplest and probably the smallest database solution (presumably boolean fields are simple bits). The downside is that there are really only three states I care about (unseen/approved/rejected) and this creates four states, including one I don't care about (a record which is seen but neither approved nor rejected is essentially unseen). String column in the DB with constants/enum in the application. Using Rating::APPROVED_STATE within the application and letting it equal whatever it wants in the DB. This is a larger column in the DB and I'm concerned about doing string comparisons whenever I need these records. Perhaps mitigatable with an index? Single boolean column, but allow nulls. A true is approved, a false is rejected, a null is unseen. Not sure of the pros/cons of this solution. What are the rules I should use to guide my choice? I'm already thinking in terms of DB size and the cost of finding records based on state, as well as the readability of the code that ends up using this structure.
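    A small sketch of the constants/enum option with a compact storage representation (names are illustrative; the stored value is a small integer rather than a string, which keeps the column narrow and cheap to index):

    ```python
    # Sketch: explicit three-state enum stored as a small integer column, so the
    # impossible fourth state from two booleans never exists.
    from enum import IntEnum

    class ReviewState(IntEnum):
        UNSEEN = 0
        APPROVED = 1
        REJECTED = 2

    # Hypothetical column definition: review_state TINYINT NOT NULL DEFAULT 0
    # Hypothetical moderation-queue query:
    #   SELECT * FROM feedback WHERE review_state = 0  -- ReviewState.UNSEEN
    def is_public(state: ReviewState) -> bool:
        return state is ReviewState.APPROVED
    ```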

    Read the article

  • How to determine the correct (case sensitive) URL for a SharePoint site

    - by Goyuix
    SharePoint is generally very tolerant about accepting a URL in a case-insensitive fashion; however, there are a few cases where it completely breaks down. For example, when creating a site column it somehow stores and uses the URL as it was at creation time, and when trying to edit the field definition through the Site Column Gallery (the fldedit.aspx page in LAYOUTS) you end up with the error below. Value does not fall within the expected range. at Microsoft.SharePoint.SPFieldCollection.GetFieldByInternalName(String strName, Boolean bThrowException) at Microsoft.SharePoint.SPFieldCollection.GetFieldByInternalName(String strName) at Microsoft.SharePoint.ApplicationPages.BasicFieldEditPage.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) How can I reliably get the correct URL for a site/web? The SPSite.Url and SPWeb.Url properties seem to return whatever case they were instantiated with. In other words, the site collection is provisioned using the following URL: http://server/Path/Site If I create a new site column using the SharePoint object model and happen to use http://server/path/site when instantiating the SPSite and SPWeb objects, the site column will be made available, but when trying to access it through the gallery the error above is generated. If I correct the URL in the address bar, I can still view/modify the definition for the SPField in question, but the default URL that is generated is bogus. Clear as mud? Example code: (this is a bad example because of the case sensitivity issue) // note: site should be partially caps: http://server/Path/Site using (SPSite site = new SPSite("http://server/path/site")) { using (SPWeb web = site.OpenWeb()) { web.Fields.AddFieldAsXml("..."); // correct XML really here } }

    Read the article

  • How to create managed properties at site collection level in SharePoint2013

    - by ybbest
    In SharePoint 2013, you can create managed properties at the site collection level. Today, I’d like to show you how to do so through PowerShell. 1. Define your managed properties, crawled properties and managed property type in an external csv file. The PowerShell script will read this file and create the managed properties and the mappings. 2. As you can see, I also defined the variant type; this is because you need the variant type to create the crawled property. In order to have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom column. However, if you do not want to do a full crawl to create those crawled properties, you can create them yourself using PowerShell; you just need to make sure the crawled properties you create have the same names as the ones a full crawl would create. Managed properties type: Text = 1 Integer = 2 Decimal = 3 DateTime = 4 YesNo = 5 Binary = 6 Variant Type: Text = 31 Integer = 20 Decimal = 5 DateTime = 64 YesNo = 11 3. You can use the following script to create your managed properties at site collection level; the difference when creating a managed property at site collection level is that you pass in the site collection id. param( [string] $siteUrl="http://SP2013/", [string] $searchAppName = "Search Service Application", $ManagedPropertiesList=(IMPORT-CSV ".\ManagedProperties.csv") ) Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue $searchapp = $null function AppendLog { param ([string] $msg, [string] $msgColor) $currentDateTime = Get-Date $msg = $msg + " --- " + $currentDateTime if (!($logOnly -eq $True)) { # write to console Write-Host -f $msgColor $msg } # write to log file Add-Content $logFilePath $msg } $scriptPath = Split-Path $myInvocation.MyCommand.Path $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt" function CreateRefiner {param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType,[System.GUID] $siteID) $cat = Get-SPEnterpriseSearchMetadataCategory –Identity SharePoint -SearchApplication $searchapp $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID if($crawledproperty -eq $null) { Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false } $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue if($managedproperty -eq $null) { Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } if($mappedProperty -eq $null) { Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication 
$searchapp -SiteCollection $siteID } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty } $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName $site= Get-SPSite $siteUrl $siteId=$site.id Write-Host "Start creating Managed properties" $i = 1 FOREACH ($property in $ManagedPropertiesList) { $propertyName=$property.managedPropertyName $crawledName=$property.crawledName $managedPropertyType=$property.managedPropertyType $variantType=$property.variantType Write-Host $managedPropertyType Write-Host "Processing managed property $propertyName $($i)..." $i++ CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId Write-Host "Managed property created " $propertyName } Key Concepts Crawled Properties: Crawled properties are discovered by the search index service component when crawling content. Managed Properties: Properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties. Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience—to make it available for Search queries and display it in Advanced Search and search results—you must map it to a managed property. References Administer search in SharePoint 2013 Preview Managing Metadata New-SPEnterpriseSearchMetadataCrawledProperty New-SPEnterpriseSearchMetadataManagedProperty Remove-SPEnterpriseSearchMetadataManagedProperty Overview of crawled and managed properties in SharePoint 2013 Preview Remove-SPEnterpriseSearchMetadataManagedProperty SharePoint 2013 – Search Service Application

    Read the article

  • SharePoint 2010 Hosting :: Error – HTTP Error 401.1 when Accessing Your SharePoint 2010 Site

    - by mbridge
    When attempting to view a MOSS (SharePoint) 2007 or SharePoint 2010 site locally from a Web Front End (WFE) you get an error stating: “HTTP Error 401.1 – Unauthorized: Access is denied due to invalid credentials.” I have noticed that this happens on Windows Server 2003/2008 SP1/SP2/R2 when using host headers and Alternate Access Mappings on a web application in MOSS 2007. If you can access the site from remote machines but cannot access the site from the server itself, then this might be your issue. For all my newer farm installs, which include SharePoint 2007 (MOSS) and SharePoint 2010, I use method number 2 on all SharePoint and SQL servers in the farm. If you cannot access the web site locally or remotely from other machines, then there is an issue with security on the site and/or possibly a Kerberos-related security issue. I implemented fix #2 listed in the following Microsoft KB article on all servers in the MOSS 2007 farm (WFEs and the Indexing/Search server). If using method 1, you would add all host headers and Alternate Access Mappings for all web applications to the BackConnectionHostNames value; then you will be able to access the sites locally from the WFEs. Microsoft KB Link: http://support.microsoft.com/kb/896861 Method 1: Specify Host Names. Please follow these steps: 1. Click Start, click Run, type regedit, and then click OK. 2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 3. Right-click MSV1_0, point to New, and then click Multi-String Value. 4. Type BackConnectionHostNames, and then press ENTER. 5. Right-click BackConnectionHostNames, and then click Modify. 6. In the Value data box, type the host name or the host names for the sites that are on the local computer, and then click OK. 7. Quit Registry Editor, and then restart the IISAdmin service. Method 2: Disable the Loopback Check. Please follow these steps: 1. Click Start, click Run, type regedit, and then click OK. 2. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa 3. Right-click Lsa, point to New, and then click DWORD Value. 4. Type DisableLoopbackCheck, and then press ENTER. 5. Right-click DisableLoopbackCheck, and then click Modify. 6. In the Value data box, type 1, and then click OK. 7. Quit Registry Editor, and then restart your computer. Give it a try and good luck.

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed to be a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows: (10.2.2.1 would be my home WAN IP) .htaccess in root of domain1 RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC] RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301] .htaccess in staging subdomain on domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L] The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration as the visitor is potentially redirected twice, however I find it to be a more granular method of control as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read. (or re-interpret, because I'm always forgetting how I put rules together!) If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn, this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify specific ranges of IPs in a subnet or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again. 
NB: if you're using individual RewriteCond lines to specify multiple IPs/ranges, make sure to put [OR] at the end of each one, otherwise mod_rewrite will interpret it as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2...", which is of course impossible! However, as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/

    Read the article

  • Migrate ASP.Net web site from IIS6 to IIS7

    - by David.Chu.ca
    I have to migrate an ASP.NET web site from IIS6 to IIS7. I tried to copy all the files for the web site from IIS6 (c:\inetpub\wwwroot\MySite) to another box with Windows Server 2008 R2, where IIS7 is the default web server. However, the simple copy does not seem to work. Should I rebuild the web site for IIS7, or should I make changes on the new box with IIS7, such as to web.config? Thanks for the comments. On further investigation I found that the http handlers seem to have caused the exception: <!--httpHandlers> <add path="Reserved.ReportViewerWebControl.axd" verb="*" type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" validate="false"/> </httpHandlers--> After I commented out the above handler in web.config, the web page works fine. This is just my initial test. I am not sure if I should rebuild the web site from source code or not. If so, do I need to specify anything for IIS7?

    Read the article

  • Changing IP address in IIS for SharePoint site results in Directory listing error

    - by Dan
    I have a server here that has 2 roles. One is Exchange 2007 and the other is MOSS 2007. In IIS I have a site, go.domain.com, which has our OWA. The other is internal.domain.com, which is the MOSS site. I have given the NIC local IPs and each site is using host headers. The GO site has an SSL cert from NetSol, and the MOSS site has a self-signed one. Right now going to either shows the NetSol certificate, which browsers complain about when going to the internal.domain.com site, obviously, since they are on the same IP in IIS. Both sites have always run off the original IP of 10.0.0.3 in IIS. When I added the second IP (10.0.0.6) to the NIC and changed the SharePoint site in IIS to use it for http and https access, I got this message in a browser when trying to connect: Directory Listing Denied. This Virtual Directory does not allow contents to be listed. Changing the IP back to 10.0.0.3 brings the internal site back up. What am I missing here? Do I need to fool around with Alternate Access Mappings in Central Admin? Am I completely missing the point with multiple SSL certs and host headers?

    Read the article

  • index.html in subdirectory will not open

    - by Dušan
    I want to have a website structure like this: example.com/ - homepage; example.com/solutions/ - this will be the solution parent page; example.com/solutions/solution-one - child solution page; example.com/solutions/solution-two - child solution page. I have set up the example.com/solutions/index.html file so it can be opened as a parent page, but it shows me an error: You don't have permission to access /solutions/.html on this server. What is the problem, and how can I open the parent directory page? I am just using regular HTML pages, no CMS or anything...

    Read the article

  • Using tar, the entire folder structure is included, I don't want that

    - by Blankman
    I am tarring a folder and for some reason the entire directory structure that precedes the folder I am tarring is included. I am doing this in a script like: 'tar czf ' + dir + '/asdf.tgz ' + dir + 'asdf/' Where dir is like: /Downloads/archive/ In the man pages, I see I can fix this but I can't get it to work. I tried: tar czf -C dir ... But now I have some kind of a file -C in my folder (which I can't seem to delete, btw!). Please help!

    Read the article

  • How to publish an Access DB to an https SharePoint 2010 site with a self-signed certificate

    - by ybbest
    If you are having trouble (shown below) when you publish your Access database to an https SharePoint 2010 site with a self-signed certificate, here is the fix. Problem: First you get a warning, see the screenshot below: And then you get the error message: Solution: The error “The name of the security certificate is invalid or does not match the name of the site” comes when the ‘common name’ in the certificate doesn’t match the address you provided in the browser to access the site. To fix the problem, you need to use a script to generate the certificate rather than the IIS UI; this is because the UI will default the common name to the server name, and you will have the above problem when using that certificate with a web application on a different host name. You can use SelfSSL.exe (IIS 6.0 only); you have to specify the common name (cn), for example as: selfssl.exe /T /N:cn=testsharepoint.com /K:1024 /V:7 /S:1 /P:443 OR you can use makecert (IIS 7.0 and above): makecert -r -pe -n 'CN=my.domain.here' -b 01/01/2000 -e 01/01/2036 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 After you have created the certificate, you then need to add that self-signed certificate to your IIS web site and to the Trusted Root Certification Authorities. (To get there, press Windows + R, type mmc.exe, and add the Certificates console.) I have compiled this solution from the questions I have asked on sharepointstackexchange.

    Read the article

  • Object detection in bitmap JavaScript canvas

    - by fallenAngel
    I want to detect clicks on canvas elements which are drawn using paths. So far I have stored element paths in a JavaScript data structure and I then check the click coordinates against each element's coordinates. Rendering each element path and checking for hits would be inefficient when there are a lot of elements. I believe there must be an algorithm for this kind of coordinate search; can anyone help me with this?
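    The usual shape of an answer here is a two-stage hit test: a cheap bounding-box (or grid-bucket) filter that discards most elements, followed by an exact point-in-polygon test (ray casting) on the few remaining candidates. A minimal language-agnostic sketch in Python, with a made-up triangle standing in for an element path:

    ```python
    # Sketch: hit-test a click against polygonal element paths.
    # Stage 1: bounding-box reject; stage 2: ray-casting point-in-polygon.
    def bbox(points):
        xs, ys = zip(*points)
        return min(xs), min(ys), max(xs), max(ys)

    def in_bbox(x, y, box):
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    def in_polygon(x, y, points):
        inside = False
        n = len(points)
        for i in range(n):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            if (y0 > y) != (y1 > y):
                # x coordinate where this edge crosses the horizontal ray through y
                cross_x = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
                if x < cross_x:
                    inside = not inside
        return inside

    def hit_test(x, y, elements):
        """elements: list of (element_id, polygon_points, cached_bbox) tuples."""
        return [eid for eid, pts, box in elements
                if in_bbox(x, y, box) and in_polygon(x, y, pts)]

    triangle = [(0, 0), (10, 0), (5, 8)]
    elements = [("tri", triangle, bbox(triangle))]
    print(hit_test(5, 3, elements))   # ['tri']
    print(hit_test(20, 3, elements))  # []
    ```

    In the browser itself, the exact test can instead be delegated to the canvas 2D context's isPointInPath check for each candidate, with the bounding-box filter keeping the number of paths that have to be rebuilt small.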

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >