Search Results

Search found 22947 results on 918 pages for 'xml schema collection'.


  • Order SimpleXMLElement by element

    - by ron
    Hi everyone, I need to order a SimpleXMLElement by a node called padre, like this: $xml = new SimpleXMLElement('sitemap.xml', null, true); sort($xml, "padre"); foreach ($xml as $url) { echo "loc}' title='".rescue_name($url->loc)."'".rescue_name($url->loc)." {$url->padre}"; } I don't know why it is not working; please let me know what my error is. Thanks in advance.

    Read the article

  • XQuery: Inserting Nodes

    - by CoolGravatar
    I'm reading in an XML file using XQuery and want to insert several nodes/elements and generate a new XML file. How can I accomplish this? I've tried using the replace() function, but it looks like all my XML tags are being stripped when I call doc() to load my document. So calling replace() isn't any good if my XML tags are being removed. Any help? Are there other technologies I can use?

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows: item_names: id | name | description | picture | common(BOOL) items: id | item_name_id | picture | price | description | picture item_synonyms: id | item_name_id | name | error(BOOL) Notes: error indicates a wrong spelling (eg. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza") I think the bright side of this schema is: Optimized searching & Handling Synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10") Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson") The down side would be: Overhead: When inserting an item, I query item_names to see if this name already exists. If not, I create a new entry. When deleting an item, I count the number of entries with the same name. If this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; accounts for possible erroneous submissions). And updating is the combination of both. Weird Item Names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this: items: id | name | picture | price | description | picture (... with item_names and item_synonyms as utility tables that I could query) Is there a better schema you would suggested? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance! References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT

    Read the article

  • Dynamic Multiple Choice (Like a Wizard) - How would you design it? (e.g. Schema, AI model, etc.)

    - by henry74
    This question can probably be broken up into multiple questions, but here goes... In essence, I'd like to allow users to type in what they would like to do and provide a wizard-like interface to ask for information which is missing to complete a requested query. For example, let's say a user types: "What is the weather like in Springfield?" We recognize the user is interested in weather, but it could be Springfield, Il or Springfield in another state. A follow-up question would be: What Springfield did you want weather for? 1 - Springfield, Il 2 - Springfield, Wi You can probably think of a million examples where a request is missing key data or its ambiguous. Make the assumption the gist of what the user wants can be understood, but there are missing pieces of data required to complete the request. Perhaps you can take it as far back as asking what the user wants to do and "leading" them to a query. This is not AI in the sense of taking any input and truly understanding it. I'm not referring to having some way to hold a conversation with a user. It's about inferring what a user wants, checking to see if there is an applicable service to be provided, identifying the inputs needed and overlaying that on top of what's missing from the request, then asking the user for the remaining information. That's it! :-) How would you want to store the information about services? How would you go about determining what was missing from the input data? My thoughts: Use regex expressions to identify clear pieces of information. These will be matched to the parameters of a service. Figure out which parameters do not have matching data and look up the associated question for those parameters. Ask those questions and capture answers. Re-run the service passing in the newly captured data. These would be more free-form questions. For multiple choice, identify the ambiguity and search for potential matches ranked in order of likelihood (add in user history/preferences to help decide). Provide the top 3 as choices. Thoughts appreciated. Cheers, Henry
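    A minimal sketch of the regular-expression matching step outlined in "My thoughts" above (all class names, patterns and questions here are made up for illustration): each service lists the parameters it needs, a pattern that recognizes each one in free text, and the follow-up question to ask when no match is found.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        // Hypothetical description of one parameter a service needs.
        class ServiceParameter
        {
            public string Name;
            public Regex Pattern;              // recognizes the parameter in the user's sentence
            public string FollowUpQuestion;    // asked when the pattern finds no match
        }

        class WizardEngine
        {
            // Returns the follow-up questions for every parameter the input did not supply.
            public static IEnumerable<string> MissingParameterQuestions(
                IEnumerable<ServiceParameter> parameters, string input)
            {
                return parameters
                    .Where(p => !p.Pattern.IsMatch(input))
                    .Select(p => p.FollowUpQuestion);
            }

            static void Main()
            {
                var weatherService = new List<ServiceParameter>
                {
                    new ServiceParameter
                    {
                        Name = "city",
                        Pattern = new Regex(@"\bin\s+\w+", RegexOptions.IgnoreCase),
                        FollowUpQuestion = "Which city did you want the weather for?"
                    },
                    new ServiceParameter
                    {
                        Name = "state",
                        Pattern = new Regex(@"\b[A-Z]{2}\b"),
                        FollowUpQuestion = "Which state is that city in?"
                    }
                };

                var input = "What is the weather like in Springfield?";
                foreach (var question in MissingParameterQuestions(weatherService, input))
                    Console.WriteLine(question);   // only the state question is printed; the city was matched
            }
        }

    The ambiguity-resolution part (Springfield, IL vs. Springfield, WI) would then rank candidate matches separately, as described in the question.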

    Read the article

  • Multiple SiteMap: entries in robots.txt?

    - by user306942
    I have been searching around using Google but I can't find an answer to this question. A robots.txt file can contain the following line: Sitemap: http://www.mysite.com/sitemapindex.xml but is it possible to specify MULTIPLE sitemap index files in the robots.txt and have the search engines recognize that and crawl ALL of the sitemaps referenced in each sitemap index file? For example, will this work: Sitemap: http://www.mysite.com/sitemapindex1.xml Sitemap: http://www.mysite.com/sitemapindex2.xml Sitemap: http://www.mysite.com/sitemapindex3.xml

    Read the article

  • Where to store site settings: DB? XML? CONFIG? CLASS FILES?

    - by Emin
    I am re-building a news portal which already has a large number of visits every day. One of the major concerns when re-building this site was to maximize performance and speed. Having said this, we have done many things, from caching to all sorts of other measures, to ensure speed. Now, towards the end of the project, I have a dilemma about where to store my site settings so that performance is least affected. The site settings will include things such as: Domain, DefaultImgPath, Google Analytics code, default emails of editors, as well as more dynamic design/display feature settings such as the background color of specific DIVs, the default color for links, etc. As far as I know, I have 4 choices for storing all this info. Database: Storing general settings in the DB and caching them may be a solution; however, I want to limit access to the database to only the necessary and essential functions of the project, which generally are insert/update/delete of news items, author articles, etc. XML: I can store these settings in an XML file, but I have not done this sort of thing before, so I don't know what kind of problems - if any - I might face in the future. CONFIG: I can also store these settings in web.config. CLASS FILE: I can hard-code all these settings in a SiteSettings class, but since the site admin himself will be able to edit these settings, it may not be the best solution. Currently, I am closest to choosing web.config, but letting people fiddle with it too often is something I do not want. E.g., if I somehow miss a validation for something and it breaks the web.config, the whole site will go down. My concern, basically, is that I cannot foresee the possible consequences of using any of the methods above (or is there another?), so I was hoping to put this question to more experienced people out here who can hopefully help me make my decision.
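    If web.config wins out, a thin wrapper can keep the rest of the code away from the configuration details. A minimal sketch, with made-up key names, reading from <appSettings> (which ASP.NET already caches in memory and reloads when the file changes):

        using System.Configuration;

        // Sketch only: the key names are assumptions; the values live in <appSettings> in web.config.
        public static class SiteSettings
        {
            public static string Domain         { get { return Get("Domain"); } }
            public static string DefaultImgPath { get { return Get("DefaultImgPath"); } }
            public static string AnalyticsCode  { get { return Get("GoogleAnalyticsCode"); } }

            private static string Get(string key)
            {
                // ConfigurationManager reads web.config once and keeps the values in memory,
                // so this costs no file or database access per request.
                return ConfigurationManager.AppSettings[key] ?? string.Empty;
            }
        }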

    Read the article

  • How can I find a file in the system from Flex

    - by praveen
    Hi, I am loading data from a sample.xml file using HTTPService. The XML file is generated by a JSP and is saved in a known location, like (d:/programfiles/some.xml). Now, when I log in to the application for the first time, I need to check whether that XML file is present or not. How can I check? Please help me with this; it would be very helpful for me.

    Read the article

  • Netcat / Apache solution to send data to PHP

    - by ToughPal
    Hi, I have a device that sends XML data to a web server in the following format: POST HTTP/1.1 Content-Type:text/xml Content-Length:369 followed by the XML. The problem here is that Apache simply sends a 400 error for this and it does not work. Is there any way to use netcat to read the XML and send it to PHP? Any other solution is welcome! Is it better to run PHP listening on a port? In that case, will multiple requests at the same time work?

    Read the article

  • @ in p4 filename

    - by Denise
    I'd like to script p4 a little. Unfortunately, some of the filenames that we're tracking have "@" in the filename. The filenames are in the form a@b.xml. If I try to do something like p4 sync a\@b.xml on a Mac (or p4 sync a@b.xml on Windows) it gives the error: Invalid changelist/client/label/date '@b.xml' Is there another way to escape it that Perforce will recognize?

    Read the article

  • Serialization error: Unable to generate a temporary class (result=1), error CS0030 - C#

    - by ltech
    Running XSD.exe on my xml to generate C# class. All works well except on this property public DocumentATTRIBUTES[][] Document { get { return this.documentField; } set { this.documentField = value; } } I want to try and use CollectionBase, and this was my attempt public DocumentATTRIBUTESCollection Document { get { return this.documentField; } set { this.documentField = value; } } /// <remarks/> [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)] public partial class DocumentATTRIBUTES { private string _author; private string _maxVersions; private string _summary; /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string author { get { return _author; } set { _author = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string max_versions { get { return _maxVersions; } set { _maxVersions = value; } } /// <remarks/> [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified)] public string summary { get { return _summary; } set { _summary = value; } } } public class DocumentAttributeCollection : System.Collections.CollectionBase { public DocumentAttributeCollection() : base() { } public DocumentATTRIBUTES this[int index] { get { return (DocumentATTRIBUTES)this.InnerList[index]; } } public void Insert(int index, DocumentATTRIBUTES value) { this.InnerList.Insert(index, value); } public int Add(DocumentATTRIBUTES value) { return (this.InnerList.Add(value)); } } However when I try to serialize my object using XmlSerializer serializer = new XmlSerializer(typeof(DocumentMetaData)); I get the error: {"Unable to generate a temporary class (result=1).\r\nerror CS0030: Cannot convert type 'DocumentATTRIBUTES' to 'DocumentAttributeCollection'\r\nerror CS1502: The best overloaded method match for 'DocumentAttributeCollection.Add(DocumentATTRIBUTES)' has some invalid arguments\r\nerror CS1503: Argument '1': cannot convert from 'DocumentAttributeCollection' to 'DocumentATTRIBUTES'\r\n"} the XSD pertaining to this property is <xs:complexType> <xs:sequence> <xs:element name="ATTRIBUTES" minOccurs="0" maxOccurs="unbounded"> <xs:complexType> <xs:sequence> <xs:element name="author" type="xs:string" minOccurs="0" /> <xs:element name="max_versions" type="xs:string" minOccurs="0" /> <xs:element name="summary" type="xs:string" minOccurs="0" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element>
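    If hand-rolling a CollectionBase type keeps fighting the serializer, one alternative worth trying is to expose a generic list instead, since XmlSerializer serializes List<T> natively. This is only a sketch; the element name and the containing class are assumed to mirror the generated code rather than taken from it:

        using System.Collections.Generic;
        using System.Xml.Serialization;

        public partial class DocumentMetaData
        {
            private List<DocumentATTRIBUTES> documentField = new List<DocumentATTRIBUTES>();

            // XmlSerializer handles List<T> out of the box, provided DocumentATTRIBUTES has a
            // public parameterless constructor and public read/write properties (it does above).
            [XmlElement("ATTRIBUTES", Form = System.Xml.Schema.XmlSchemaForm.Unqualified)]
            public List<DocumentATTRIBUTES> Document
            {
                get { return this.documentField; }
                set { this.documentField = value; }
            }
        }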

    Read the article

  • Confused about which object I should assign the function to

    - by CliffC
    I have the following two class designs for converting an object into an XML string. Should I do something like class Person { public string GetXml() { // return an xml string } }, or is it better to create another class which accepts the person as a parameter and converts it into XML, something like class PersonSerializer { public string Serialize(Person person) { // return an xml string } }? Thanks
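    If the second option appeals, one possible shape for it is a small generic serializer built on the BCL's XmlSerializer, so Person stays a plain data class (a sketch, not a prescribed design; the class and property names are illustrative):

        using System.IO;
        using System.Xml.Serialization;

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        // Serialization logic lives outside the entity and works for any type XmlSerializer can handle.
        public class XmlStringSerializer
        {
            public string Serialize<T>(T instance)
            {
                var serializer = new XmlSerializer(typeof(T));
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, instance);
                    return writer.ToString();
                }
            }
        }

        // Usage: string xml = new XmlStringSerializer().Serialize(new Person { Name = "Ann", Age = 30 });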

    Read the article

  • Create an HTML anchor to a file on the C drive

    - by GigaPr
    Hi, could you tell me how to create a link to a file on the C drive (local machine), or a link to download a file from the hard drive? This doesn't seem to work: <a href="C:/Documents and Settings/Giga/My Documents/NetBeansProjects/JavaRssFeed/RssFeed/build/web/WEB-INF/Xml/Gaetano Feed.xml" class="font18">C:/Documents and Settings/Giga/My Documents/NetBeansProjects/JavaRssFeed/RssFeed/build/web/WEB-INF/Xml/Gaetano Feed.xml</a> Thanks

    Read the article

  • XSLT processing with Qt

    - by swegi
    Hi, I'd like to display some (X)HTML content in a Qt application using QtWebKit. The content should be generated from XML documents via XSLT. As I am new to Qt, my questions are as follows: 1) Can QtWebKit display XML documents with the xml-stylesheet element set? 2) Can Qt apply XSLT to an XML document and return the result as a string or write it to a file?

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
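    As a concrete contrast to the element-at-a-time approach described above, the framework's built-in range partitioner hands each task a chunk of indices rather than a single element, which amortizes the per-delegate overhead. A minimal sketch (the array contents and the fixed chunk size are arbitrary; note that a fixed range size gives static partitioning, without the TPL's growing-chunk load balancing):

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class PartitioningDemo
        {
            static void Main()
            {
                var data = new double[1000000];

                // Each partition is a [from, to) index range of 10,000 elements.
                var ranges = Partitioner.Create(0, data.Length, 10000);

                Parallel.ForEach(ranges, range =>
                {
                    for (int i = range.Item1; i < range.Item2; i++)
                    {
                        data[i] = Math.Sqrt(i);   // stand-in for the real per-element work
                    }
                });

                Console.WriteLine("Processed {0} elements.", data.Length);
            }
        }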

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I'm going to discuss some key architectural changes and concepts introduced in TFS 2010 compared to TFS 2008. In TFS 2010, you first run the installation and then configure one of the available installation features. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms. 1) Installation features available in TFS 2010: a) Basic: The most compact TFS installation possible. It installs and configures Source Control, Work Item Tracking and Build services only (SharePoint and Reporting integration will not be possible). b) Standard Single Server: Suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server. c) Advanced: Suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services. d) Application Tier Only: For configuring high availability for Team Foundation Server in a load-balanced environment (NLB), moving Team Foundation Server from one server to another, or restoring TFS. e) Upgrade: If you want to upgrade from a prior version of TFS. Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas earlier you could not install previous versions of TFS (2008 and 2005) on a client OS. 2) Team Project Collections: Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of Team Projects. Each project may or may not be independent of the others, every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do: a) If something has gone wrong in one team project and you want to restore it back to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backup taken as per your maintenance plans, and because of this the other team projects may lose any work that was not backed up. b) Your company merged with another company and now you have two TFS servers.
    You are working on one TFS server and the other company was working on the other, and after the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. You can manually create the team projects (in Source Control) in the server into which you want to integrate them, but you will lose the history of changesets, work items and other things which are very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve issues in these kinds of scenarios, which mainly related to TFS maintenance, integration, migration and security, Microsoft came up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with the SharePoint architecture it will help you understand the TFS 2010 architecture easily. Connect to TFS dialog box in TFS 2010: In the above dialog box you can see there are two Team Project Collections, and each collection can contain any number of team projects; on the right side it shows the two team projects in the Team Project Collection (Default Collection) which I have chosen. Note: You can connect to only one Team Project Collection at a time using an instance of TFS Team Explorer. How does it work? To introduce Team Project Collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...). New TFS 2010 database architecture: TFS_Config: the root database; it contains centralized TFS configuration data, including the list of all team projects that exist on the TFS server. TFS_Warehouse: the data warehouse contains all the reporting data served by this server (farm). TFS_*: individual team project collection data; each such database contains all the operational data of a team project collection regardless of subsystem. In addition to this, you will have databases for SharePoint and Report Server. 3) TFS Farms: As TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to call a multi-server installation of TFS a TFS farm. NLB support for TFS application tiers – with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of Team Project Collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm will automatically continue to work with hardly any indication to end users that there is a problem. SQL data tiers: with 2010 you can configure many SQL Servers. Each database can be configured to be on any SQL Server, because each Team Project Collection is an independent database. This feature can also be used to load-balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With Team Project Collections and TFS farms, you can create a single, arbitrarily large TFS installation. You can grow it incrementally by adding ATs and SQL Servers as needed.

    Read the article

  • SQL Server Management Data Warehouse - quick tour on setting health monitoring policies

    - by ssqa.net
    Profiler, Perfmon, DMVs & scripts are legendary tools for a DBA to monitor the SQL arena. In line with these tools SQL Server 2008 throws a powerful stream with policy based management (PBM) framework & management data warehouse (MDW) methods, which is a relational database that contains the data that is collected from a server that is a data collection target. This data is used to generate the reports for the System Data collection sets, and can also be used to create custom reports. .....(read more)

    Read the article

  • Stackify Aims to Put More ‘Dev’ in ‘DevOps’

    - by Matt Watson
    Originally published on VisualStudioMagazine.com on 8/22/2012 by Keith Ward. The Kansas City-based startup wants to make it easier for developers to examine the network stack and find problems in code. The first part of “DevOps” is “Dev”. But according to Matt Watson, Devs aren’t connected enough with Ops, and it’s time that changed. He founded the startup company Stackify earlier this year to do something about it. Stackify gives developers unprecedented access to the IT side of the equation, Watson says, without putting additional burden on the system and network administrators who ultimately ensure the health of the environment. “We need a product designed for developers, with the goal of getting them more involved in operations and app support. Now, there’s next to nothing designed for developers,” Watson says. Stackify allows developers to search the network stack to troubleshoot problems in their software that might otherwise take days of coordination between development and IT teams to solve. Stackify allows developers to search log files, configuration files, databases and other infrastructure to locate errors. A key to this is that the developers are normally granted read-only access, soothing admin fears that developers will upload bad code to their servers. Implementation starts with data collection on the servers. Among the information gleaned is application discovery, server monitoring, file access, and other data collection, according to Stackify’s Web site. Watson confirmed that Stackify works seamlessly with virtualized environments as well. Although the data collection software must be installed on Windows servers, it can monitor both Windows and Linux servers. Once collection’s finished, developers have the kind of information they need, without causing heartburn for the IT staff. Stackify is a 100 percent cloud-based service. The company uses Windows Azure for hosting, a decision Watson’s happy with. With Azure, he says, “It’s nice to have all the dev tools like cache and table storage.” Although there have been a few glitches here and there with the service, it’s run very smoothly for the most part, he adds. Stackify is currently in a closed beta, with a public release scheduled for October. Watson says that pricing is expected to be $25 per month, per server, with volume discounts available. He adds that the target audience is companies with at least five developers. Watson founded Stackify after selling his last company, VinSolutions, to AutoTrader.com for “close to $150 million”, according to press accounts. Watson has since founded the Watson Technology Group, which focuses on angel investing. About the Author: Keith Ward is the editor in chief of Visual Studio Magazine.

    Read the article

  • Canonicalization of single, small pages like reviews or product categories [SEO]

    - by Valorized
    In general I pretty much like the idea of canonicalization. And in most cases, Google explains possible procedures in a clear way. For example: if I have duplicates because of parameters (e.g. &sort=desc), it's clear to use the canonical for the site, provided within the head-tag. However, I'm wondering how to handle "small - not to say thin content - sites". What's my definition of a small site? An example: On one of my main sites, we use a directory-based url-structure. Let's see: example.com/ (root) example.com/category-abc/ example.com/category-abc/produkt-xy/ Moreover, we provide one page that includes all products: example.com/all-categories/ (lists all products the same way as in the categories) In the case of reviews, we use a similar structure: example.com/reviews/product-xy/ shows all reviews for one certain product example.com/reviews/product-xy/abc-your-product-is-great/ shows one certain review example.com/reviews/ shows all reviews for all products (latest first) Let's make it even more complicated: on every product site, there are the latest 2 reviews at the end of the page. So you see, a lot of potential duplicates. Q1: Should I create canonicals for a: example.com/category-abc/ to example.com/all-categories/ b: example.com/reviews/product-xy/abc-your-product-is-great/ to example.com/reviews/product-xy/ or to example.com/review/ or none of them? Q2: Can I link the collection of categories (all-categories/) and the collection of all reviews (reviews/ and reviews/product-xy/) to the single category and the single review, respectively? Example: example.com/reviews/ includes - let's say - 100 reviews. Can I somehow use a markup that tells search engines: "Hey, wait, you are now looking at a collection of 100 reviews - do not index this collection, you should rather prefer indexing every single review as a single page!"? In HTML it might be something like that (which - of course - does not work, it's only to show you what I mean): <div class="review" rel="canonical" href="http://example.com/reviews/product-xz/abc-your-product-is-great/">HERE GOES THE REVIEW</div> Reason: I don't think it is a great user experience if the user searches for "your product is great" and lands on example.com/reviews/ instead of example.com/reviews/product-xy/abc-your-product-is-great/. On the first site, he will have to search and might stop because of frustration. The second result, however, might lead to a conversion. The same applies for categories. If the user is searching for category-Z, he might land on the all-categories page and he has to scroll down to the (last) category, to find what he searched for (Z). So what's best practice? What should I do? Thank you for your help!

    Read the article

  • Using the jQuery UI Library in a MVC 3 Application to Build a Dialog Form

    - by ChrisD
    Using a simulated dialog window is a nice way to handle inline data editing. The jQuery UI has a UI widget for a dialog window that makes it easy to get up and running with it in your application. With the release of ASP.NET MVC 3, Microsoft included the jQuery UI scripts and files in the MVC 3 project templates for Visual Studio. With the release of the MVC 3 Tools Update, Microsoft implemented the inclusion of those with NuGet as packages. That means we can get up and running using the latest version of the jQuery UI with minimal effort. To the code! Another that might interested you about JQuery Mobile and ASP.NET MVC 3 with C#. If you are starting with a new MVC 3 application and have the Tools Update then you are a NuGet update and a <link> and <script> tag away from adding the jQuery UI to your project. If you are using an existing MVC project you can still get the jQuery UI library added to your project via NuGet and then add the link and script tags. Assuming that you have pulled down the latest version (at the time of this publish it was 1.8.13) you can add the following link and script tags to your <head> tag: < link href = "@Url.Content(" ~ / Content / themes / base / jquery . ui . all . css ")" rel = "Stylesheet" type = "text/css" /> < script src = "@Url.Content(" ~ / Scripts / jquery-ui-1 . 8 . 13 . min . js ")" type = "text/javascript" ></ script > The jQuery UI library relies upon the CSS scripts and some image files to handle rendering of its widgets (you can choose a different theme or role your own if you like). Adding these to the stock _Layout.cshtml file results in the following markup: <!DOCTYPE html> < html > < head >     < meta charset = "utf-8" />     < title > @ViewBag.Title </ title >     < link href = "@Url.Content(" ~ / Content / Site . css ")" rel = "stylesheet" type = "text/css" />     <link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="Stylesheet" type="text/css" />     <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>     <script src="@Url.Content("~/Scripts/modernizr-1.7.min . js ")" type = "text/javascript" ></ script >     < script src = "@Url.Content(" ~ / Scripts / jquery-ui-1 . 8 . 13 . min . js ")" type = "text/javascript" ></ script > </ head > < body >     @RenderBody() </ body > </ html > Our example will involve building a list of notes with an id, title and description. Each note can be edited and new notes can be added. The user will never have to leave the single page of notes to manage the note data. The add and edit forms will be delivered in a jQuery UI dialog widget and the note list content will get reloaded via an AJAX call after each change to the list. To begin, we need to craft a model and a data management class. We will do this so we can simulate data storage and get a feel for the workflow of the user experience. The first class named Note will have properties to represent our data model. namespace Website . Models {     public class Note     {         public int Id { get ; set ; }         public string Title { get ; set ; }         public string Body { get ; set ; }     } } The second class named NoteManager will be used to set up our simulated data storage and provide methods for querying and updating the data. We will take a look at the class content as a whole and then walk through each method after. using System . Collections . ObjectModel ; using System . Linq ; using System . Web ; namespace Website . 
Models {     public class NoteManager     {         public Collection < Note > Notes         {             get             {                 if ( HttpRuntime . Cache [ "Notes" ] == null )                     this . loadInitialData ();                 return ( Collection < Note >) HttpRuntime . Cache [ "Notes" ];             }         }         private void loadInitialData ()         {             var notes = new Collection < Note >();             notes . Add ( new Note                           {                               Id = 1 ,                               Title = "Set DVR for Sunday" ,                               Body = "Don't forget to record Game of Thrones!"                           });             notes . Add ( new Note                           {                               Id = 2 ,                               Title = "Read MVC article" ,                               Body = "Check out the new iwantmymvc.com post"                           });             notes . Add ( new Note                           {                               Id = 3 ,                               Title = "Pick up kid" ,                               Body = "Daughter out of school at 1:30pm on Thursday. Don't forget!"                           });             notes . Add ( new Note                           {                               Id = 4 ,                               Title = "Paint" ,                               Body = "Finish the 2nd coat in the bathroom"                           });             HttpRuntime . Cache [ "Notes" ] = notes ;         }         public Collection < Note > GetAll ()         {             return Notes ;         }         public Note GetById ( int id )         {             return Notes . Where ( i => i . Id == id ). FirstOrDefault ();         }         public int Save ( Note item )         {             if ( item . Id <= 0 )                 return saveAsNew ( item );             var existingNote = Notes . Where ( i => i . Id == item . Id ). FirstOrDefault ();             existingNote . Title = item . Title ;             existingNote . Body = item . Body ;             return existingNote . Id ;         }         private int saveAsNew ( Note item )         {             item . Id = Notes . Count + 1 ;             Notes . Add ( item );             return item . Id ;         }     } } The class has a property named Notes that is read only and handles instantiating a collection of Note objects in the runtime cache if it doesn't exist, and then returns the collection from the cache. This property is there to give us a simulated storage so that we didn't have to add a full blown database (beyond the scope of this post). The private method loadInitialData handles pre-filling the collection of Note objects with some initial data and stuffs them into the cache. Both of these chunks of code would be refactored out with a move to a real means of data storage. The GetAll and GetById methods access our simulated data storage to return all of our notes or a specific note by id. The Save method takes in a Note object, checks to see if it has an Id less than or equal to zero (we assume that an Id that is not greater than zero represents a note that is new) and if so, calls the private method saveAsNew . If the Note item sent in has an Id , the code finds that Note in the simulated storage, updates the Title and Description , and returns the Id value. The saveAsNew method sets the Id , adds it to the simulated storage, and returns the Id value. 
The increment of the Id is simulated here by getting the current count of the note collection and adding 1 to it. The setting of the Id is the only other chunk of code that would be refactored out when moving to a different data storage approach. With our model and data manager code in place we can turn our attention to the controller and views. We can do all of our work in a single controller. If we use a HomeController , we can add an action method named Index that will return our main view. An action method named List will get all of our Note objects from our manager and return a partial view. We will use some jQuery to make an AJAX call to that action method and update our main view with the partial view content returned. Since the jQuery AJAX call will cache the call to the content in Internet Explorer by default (a setting in jQuery), we will decorate the List, Create and Edit action methods with the OutputCache attribute and a duration of 0. This will send the no-cache flag back in the header of the content to the browser and jQuery will pick that up and not cache the AJAX call. The Create action method instantiates a new Note model object and returns a partial view, specifying the NoteForm.cshtml view file and passing in the model. The NoteForm view is used for the add and edit functionality. The Edit action method takes in the Id of the note to be edited, loads the Note model object based on that Id , and does the same return of the partial view as the Create method. The Save method takes in the posted Note object and sends it to the manager to save. It is decorated with the HttpPost attribute to ensure that it will only be available via a POST. It returns a Json object with a property named Success that can be used by the UX to verify everything went well (we won't use that in our example). Both the add and edit actions in the UX will post to the Save action method, allowing us to reduce the amount of unique jQuery we need to write in our view. The contents of the HomeController.cs file: using System . Web . Mvc ; using Website . Models ; namespace Website . Controllers {     public class HomeController : Controller     {         public ActionResult Index ()         {             return View ();         }         [ OutputCache ( Duration = 0 )]         public ActionResult List ()         {             var manager = new NoteManager ();             var model = manager . GetAll ();             return PartialView ( model );         }         [ OutputCache ( Duration = 0 )]         public ActionResult Create ()         {             var model = new Note ();             return PartialView ( "NoteForm" , model );         }         [ OutputCache ( Duration = 0 )]         public ActionResult Edit ( int id )         {             var manager = new NoteManager ();             var model = manager . GetById ( id );             return PartialView ( "NoteForm" , model );         }         [ HttpPost ]         public JsonResult Save ( Note note )         {             var manager = new NoteManager ();             var noteId = manager . Save ( note );             return Json ( new { Success = noteId > 0 });         }     } } The view for the note form, NoteForm.cshtml , looks like so: @model Website . Models . Note @using ( Html . BeginForm ( "Save" , "Home" , FormMethod . Post , new { id = "NoteForm" })) { @Html . Hidden ( "Id" ) < label class = "Title" >     < span > Title < /span><br / >     @Html . TextBox ( "Title" ) < /label> <label class="Body">     <span>Body</ span >< br />     @Html . 
TextArea ( "Body" ) < /label> } It is a strongly typed view for our Note model class. We give the <form> element an id attribute so that we can reference it via jQuery. The <label> and <span> tags give our UX some structure that we can style with some CSS. The List.cshtml view is used to render out a <ul> element with all of our notes. @model IEnumerable < Website . Models . Note > < ul class = "NotesList" >     @foreach ( var note in Model )     {     < li >         @note . Title < br />         @note . Body < br />         < span class = "EditLink ButtonLink" noteid = "@note.Id" > Edit < /span>     </ li >     } < /ul> This view is strongly typed as well. It includes a <span> tag that we will use as an edit button. We add a custom attribute named noteid to the <span> tag that we can use in our jQuery to identify the Id of the note object we want to edit. The view, Index.cshtml , contains a bit of html block structure and all of our jQuery logic code. @ {     ViewBag . Title = "Index" ; } < h2 > Notes < /h2> <div id="NoteListBlock"></ div > < span class = "AddLink ButtonLink" > Add New Note < /span> <div id="NoteDialog" title="" class="Hidden"></ div > < script type = "text/javascript" >     $ ( function () {         $ ( "#NoteDialog" ). dialog ({             autoOpen : false , width : 400 , height : 330 , modal : true ,             buttons : {                 "Save" : function () {                     $ . post ( "/Home/Save" ,                         $ ( "#NoteForm" ). serialize (),                         function () {                             $ ( "#NoteDialog" ). dialog ( "close" );                             LoadList ();                         });                 },                 Cancel : function () { $ ( this ). dialog ( "close" ); }             }         });         $ ( ".EditLink" ). live ( "click" , function () {             var id = $ ( this ). attr ( "noteid" );             $ ( "#NoteDialog" ). html ( "" )                 . dialog ( "option" , "title" , "Edit Note" )                 . load ( "/Home/Edit/" + id , function () { $ ( "#NoteDialog" ). dialog ( "open" ); });         });         $ ( ".AddLink" ). click ( function () {             $ ( "#NoteDialog" ). html ( "" )                 . dialog ( "option" , "title" , "Add Note" )                 . load ( "/Home/Create" , function () { $ ( "#NoteDialog" ). dialog ( "open" ); });         });         LoadList ();     });     function LoadList () {         $ ( "#NoteListBlock" ). load ( "/Home/List" );     } < /script> The <div> tag with the id attribute of "NoteListBlock" is used as a container target for the load of the partial view content of our List action method. It starts out empty and will get loaded with content via jQuery once the DOM is loaded. The <div> tag with the id attribute of "NoteDialog" is the element for our dialog widget. The jQuery UI library will use the title attribute for the text in the dialog widget top header bar. We start out with it empty here and will dynamically change the text via jQuery based on the request to either add or edit a note. This <div> tag is given a CSS class named "Hidden" that will set the display:none style on the element. Since our call to the jQuery UI method to make the element a dialog widget will occur in the jQuery document ready code block, the end user will see the <div> element rendered in their browser as the page renders and then it will hide after that jQuery call. 
Adding the display:hidden to the <div> element via CSS will ensure that it is never rendered until the user triggers the request to open the dialog. The jQuery document load block contains the setup for the dialog node, click event bindings for the edit and add links, and a call to a JavaScript function called LoadList that handles the AJAX call to the List action method. The .dialog() method is called on the "NoteDialog" <div> element and the options are set for the dialog widget. The buttons option defines 2 buttons and their click actions. The first is the "Save" button (the text in quotations is used as the text for the button) that will do an AJAX post to our Save action method and send the serialized form data from the note form (targeted with the id attribute "NoteForm"). Upon completion it will close the dialog widget and call the LoadList to update the UX without a redirect. The "Cancel" button simply closes the dialog widget. The .live() method handles binding a function to the "click" event on all elements with the CSS class named EditLink . We use the .live() method because it will catch and bind our function to elements even as the DOM changes. Since we will be constantly changing the note list as we add and edit we want to ensure that the edit links get wired up with click events. The function for the click event on the edit links gets the noteid attribute and stores it in a local variable. Then it clears out the HTML in the dialog element (to ensure a fresh start), calls the .dialog() method and sets the "title" option (this sets the title attribute value), and then calls the .load() AJAX method to hit our Edit action method and inject the returned content into the "NoteDialog" <div> element. Once the .load() method is complete it opens the dialog widget. The click event binding for the add link is similar to the edit, only we don't need to get the id value and we load the Create action method. This binding is done via the .click() method because it will only be bound on the initial load of the page. The add button will always exist. Finally, we toss in some CSS in the Content/Site.css file to style our form and the add/edit links. . ButtonLink { color : Blue ; cursor : pointer ; } . ButtonLink : hover { text - decoration : underline ; } . Hidden { display : none ; } #NoteForm label { display:block; margin-bottom:6px; } #NoteForm label > span { font-weight:bold; } #NoteForm input[type=text] { width:350px; } #NoteForm textarea { width:350px; height:80px; } With all of our code in place we can do an F5 and see our list of notes: If we click on an edit link we will get the dialog widget with the correct note data loaded: And if we click on the add new note link we will get the dialog widget with the empty form: The end result of our solution tree for our sample:

    Read the article

  • mongoDB Management Studio

    - by Liam McLennan
    This weekend I have been in Sydney at the MS Web Camp, learning about web application development. At the end of the first day we came up with application ideas and pitched them. My idea was to build a web management application for mongoDB. mongoDB I pitched my idea, put down the microphone, and then someone asked, “what’s mongo?”. Good question. MongoDB is a document database that stores JSON style documents. This is a JSON document for a tweet from twitter: db.tweets.find()[0] { "_id" : ObjectId("4bfe4946cfbfb01420000001"), "created_at" : "Thu, 27 May 2010 10:25:46 +0000", "profile_image_url" : "http://a3.twimg.com/profile_images/600304197/Snapshot_2009-07-26_13-12-43_normal.jpg", "from_user" : "drearyclocks", "text" : "Does anyone know who has better coverage, Optus or Vodafone? Telstra is still too expensive.", "to_user_id" : null, "metadata" : { "result_type" : "recent" }, "id" : { "floatApprox" : 14825648892 }, "geo" : null, "from_user_id" : 6825770, "search_term" : "telstra", "iso_language_code" : "en", "source" : "&lt;a href=&quot;http://www.tweetdeck.com&quot; rel=&quot;nofollow&quot;&gt;TweetDeck&lt;/a&gt;" } A mongodb server can have many databases, each database has many collections (instead of tables) and a collection has many documents (instead of rows). Development Day 2 of the Sydney MS Web Camp was allocated to building our applications. First thing in the morning I identified the stories that I wanted to implement: Scenario: View databases Scenario: View Collections in a database Scenario: View Documents in a Collection Scenario: Delete a Collection Scenario: Delete a Database Scenario: Delete Documents Over the course of the day the team (3.5 developers) implemented all of the planned stories (except ‘delete a database’) and also implemented the following: Scenario: Create Database Scenario: Create Collection Lessons Learned I’m new to MongoDB and in the past I have only accessed it from Ruby (for my hare-brained scheme). When it came to implementing our MongoDB management studio we discovered that their is no official MongoDB driver for .NET. We chose to use NoRM, honestly just because it was the only one I had heard of. NoRM was a challenge. I think it is a fine library but it is focused on mapping strongly typed objects to MongoDB. For our application we had no prior knowledge of the types that would be in the MongoDB database so NoRM was probably a poor choice. Here are some screens (click to enlarge):

    Read the article

  • Using Query Classes With NHibernate

    - by Liam McLennan
    Even when using an ORM, such as NHibernate, the developer still has to decide how to perform queries. The simplest strategy is to get access to an ISession and directly perform a query whenever you need data. The problem is that doing so spreads query logic throughout the entire application – a clear violation of the Single Responsibility Principle. A more advanced strategy is to use Eric Evan’s Repository pattern, thus isolating all query logic within the repository classes. I prefer to use Query Classes. Every query needed by the application is represented by a query class, aka a specification. To perform a query I: Instantiate a new instance of the required query class, providing any data that it needs Pass the instantiated query class to an extension method on NHibernate’s ISession type. To query my database for all people over the age of sixteen looks like this: [Test] public void QueryBySpecification() { var canDriveSpecification = new PeopleOverAgeSpecification(16); var allPeopleOfDrivingAge = session.QueryBySpecification(canDriveSpecification); } To be able to query for people over a certain age I had to create a suitable query class: public class PeopleOverAgeSpecification : Specification<Person> { private readonly int age; public PeopleOverAgeSpecification(int age) { this.age = age; } public override IQueryable<Person> Reduce(IQueryable<Person> collection) { return collection.Where(person => person.Age > age); } public override IQueryable<Person> Sort(IQueryable<Person> collection) { return collection.OrderBy(person => person.Name); } } Finally, the extension method to add QueryBySpecification to ISession: public static class SessionExtensions { public static IEnumerable<T> QueryBySpecification<T>(this ISession session, Specification<T> specification) { return specification.Fetch( specification.Sort( specification.Reduce(session.Query<T>()) ) ); } } The inspiration for this style of data access came from Ayende’s post Do You Need a Framework?. I am sick of working through multiple layers of abstraction that don’t do anything. Have you ever seen code that required a service layer to call a method on a repository, that delegated to a common repository base class that wrapped and ORMs unit of work? I can achieve the same thing with NHibernate’s ISession and a single extension method. If you’re interested you can get the full Query Classes example source from Github.
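    The post uses but does not show the Specification<T> base class; a sketch of what it might look like, inferred from the calls in the extension method (Reduce filters, Sort orders, Fetch materializes) rather than taken from the author's code:

        using System.Collections.Generic;
        using System.Linq;

        // Inferred sketch only - not the author's actual base class.
        public abstract class Specification<T>
        {
            // Apply the filtering logic of the query.
            public abstract IQueryable<T> Reduce(IQueryable<T> collection);

            // Apply the ordering of the query.
            public abstract IQueryable<T> Sort(IQueryable<T> collection);

            // Materialize the results; override to add a fetching/eager-loading strategy.
            public virtual IEnumerable<T> Fetch(IQueryable<T> collection)
            {
                return collection.ToList();
            }
        }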

    Read the article

  • Denali CTP3 - Semantic Search 2 (Lots of documents)

    - by sqlartist
    Hi again, I thought I would improve on the previous post by actually putting a decent amount of content into the Filetable - this time I used the open-source DMOZ Health document repository, which contains 5,880 files inside 220 folders. The files are all HTML and are pretty small in size. The entire document collection is about 120Mb unzipped and 30Mb zipped. If anyone is interested in testing this collection, drop me a note and I will upload the dmoz_health repository archive to Skydrive. This time...(read more)

    Read the article
