Search Results

Search found 29203 results on 1169 pages for 'state machine workflow'.


  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that fail to deliver either user-friendly content integration or satisfactory performance. Our intention is to close any knowledge gap that our previous posts might have left you with, so this post repeats some recommendations and refers back to older, still useful posts. Moreover, this post will help you understand, from the ground up, how to design, architect and implement business-enabled, responsive and well-performing portals with complex requirements for business-centric information publishing.

    Design the Information Model
    The key to successful portal deployments is information modeling. It is essential to understand the use case you are designing for, so here is a set of questions to ask yourself or your customer:
    - Who will own the content, IT or Business? Answer: Business
    - Who will publish the content, IT or Business? Answer: Business
    - Will there be multiple publishers? Answer: Yes
    - Are the publishers computer scientists? Answer: No
    - How often does the information change: daily, weekly, monthly? Answer: Daily or weekly
    If your answers match at least two of the answers above, we strongly recommend you design your content with the following principles:
    - Divide your pages into logical sections, where each section is marked with its purpose.
    - Assign capabilities to each section: does it contain text, images and formatting, or is it static and populated through other contextual information?
    - Select the editor/design element type: WYSIWYG (rich text), Plain Text (unformatted text), Image (image object), Static List (a static list of formatted information), or Dynamic Data List (information assembled from multiple data files through a CMIS query).
    The result of such a design map could look like the examples below. Based on the required elements in the third design column from the left, you now simply design a data model in WebCenter Content - Site Studio by creating a Region Definition structure that matches your design requirements. For more information on how to create a Region Definition, see the Region Definition post (see instruction 7 for details). Each region definition can now be used to instantiate data files; a data file holds the actual data for each element in the region definition. Another way to look at this is to treat the region definition as an extension of the WebCenter Content metadata model for each data file item.

    Design Content Templates
    With a solid, dependable information model in place we can proceed to template creation and page design. This phase focuses on how to place the content sections from the region definition on the page via a Content Presenter template. Remember that by creating Content Presenter templates you leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wire-frames to base the logic on; however, there are still a few considerations to pay attention to:
    - Base the template on ADF and make only necessary exceptions to markup when required.
    - Leverage ADF design components for Tabs, Accordions and other similar components; this way the design in the content-published areas will be consistent with other design areas based on custom ADF task flows.
    - There is no performance difference between metadata-based and region-definition-based data.
    - All data, whether metadata or XML data, can be accessed via the Content Presenter node.
    See below for applied examples of how to access data:
    - Access a metadata property from a document: #{node.propertyMap['myProp'].value} - myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata field.
    - Access element data from a data file's XML: #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml} - Region Definition Name is the region definition that the current data file instantiates, and Element name is the element value you want to grab from the data file.
    I recommend you read the following useful posts on the content template topic: CMIS queries and template creation (see instruction 9 for details) and Static List template rendering. For more information on templates: Single Item Content Template, Multi Item Content Template, Expression Language.

    Internationalization Considerations
    When integrating content assets via the Content Presenter, you probably understand by now that the content item/data file is wired to the page. What is also pretty common at this stage is that the content item/data file only supports one language, since it is neither practical nor business-friendly to mix languages into a complex structure. You are therefore left with a very common dilemma: building a complete new portal for each locale, which is not a good option. However, with a little information modeling and a clear naming convention this can be addressed. Simply make sure that all content items/data files are named with a predictable naming convention such as "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple customizations, you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach you not only enable a simple mechanism for internationalized content, you also preserve the Content Presenter's support for business-accessible run-time publishing of information on existing and new pages. I recommend you read the following useful post on internationalization topics: Internationalize with Content Presenter.

    Integrate with Review & Approval Processes
    Today the review and approval functionality and configuration is based on WebCenter Content Criteria Workflows. A Criteria Workflow uses the metadata of the checked-in document to evaluate whether the document is under any review/approval process. For instance, if a Criteria Workflow is configured to trigger on documents with Version "2" or higher and Content Type "Instructions", any matching content item version will, on check-in, enter the workflow before being released for general access. A few things to consider when configuring Criteria Workflows:
    - Make sure not to trigger on version one for content items that are data files. If you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existing document, since the document only becomes available after successful completion of the workflow.
    - Approval workflows sometimes require more complex criteria; in that case the recommendation is that the metadata triggering such criteria is populated automatically, which can be achieved through many approaches including Content Profiles.
    - Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows.
    Once Criteria Workflows are configured, the Content Presenter supports editors with the approval process directly inline in the portal's "Contribution mode". In addition to approve/reject actions and task details, the Content Presenter natively lets the user view the current and future versions of the change he or she is approving. See below for an example.

    Architectural Recommendations
    - To support review & approval processes, minimize the number of data files per page.
    - Each CMIS query can consume significant time depending on its complexity, so minimize the number of CMIS queries per page.
    - Use Content Presenter templates based on ADF; this minimizes design considerations and optimizes the use of caching.
    - Implement the page with as few data files as possible; this simplifies the publishing and release processes and increases performance.
    - Integrating pages with a named data file (node), or a list of named nodes, increases performance compared to querying for data.
    - Integrating pages with a named data file (node), or a list of named nodes, enables business-centric page creation and publishing and reduces the need for IT department interaction.

    Summary
    Just because an architectural decision solves a business problem does not mean it is the right one; when designing portals, all parts of the architecture have to work in harmony without impacting each other. For instance, the most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance, or both. The best approach is therefore to first design for a simplicity that even a non-technical user can operate, then consider the performance impact, and finally look at the technology challenges these choices bring: work around them first with out-of-the-box features, and only then design and develop functions to complement the shortcomings.

    Read the article

  • Problem restoring a SQL Server backup

    - by Elmex
    I have a SQL Server 2008 instance which is part of a domain. I take a backup of a database on this server and restore it on a SQL Server that is not part of a domain. I have a C# application which uses this database. On the non-domain machine I now get exceptions like this: "Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission." I think the problem is that the database owner is a domain user and this user doesn't exist on the target machine (the machine I restored to)!? How can I solve this?
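
    A common way to clear this after restoring to the non-domain server is to remap the database owner (the orphaned domain "dbo") to a login that exists on the target server, for example sa. Below is a minimal C# sketch of that repair, assuming a sysadmin connection and a hypothetical database name MyDatabase; the same ALTER AUTHORIZATION statement can of course be run directly in Management Studio instead.

        using System;
        using System.Data.SqlClient;

        class FixRestoredDbOwner
        {
            static void Main()
            {
                // Assumption: run as a sysadmin login on the target (non-domain) server.
                const string connectionString =
                    "Server=localhost;Database=master;Integrated Security=True";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "ALTER AUTHORIZATION ON DATABASE::[MyDatabase] TO [sa];", connection))
                {
                    connection.Open();
                    // Remaps the restored database's owner so the 'dbo' principal
                    // maps to a login that exists on this server.
                    command.ExecuteNonQuery();
                }
            }
        }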

    Read the article

  • Feature (de)activation error “The web or site was not found” and Application Pool

    - by panjkov
    I am using the Microsoft IW Demo VM (2010-10A) for my SharePoint experiments in all cases when I don’t have time (read: when I’m lazy) to create a complete SharePoint dev environment. Problem: this particular time I was playing around with site-scoped features and a newly created site collection. So here is my workflow: create a feature with a feature receiver, deploy it to the site collection from Visual Studio using the “No Activation” deployment profile, then activate the feature from the “Site Collection Features” interface...(read more)

    Read the article

  • PayPal IPN - having trouble accessing session data?

    - by Martin Bean
    Hello, all. I'm having issues with PayPal IPN integration where it seems I cannot get my solution to read session variables. Basically, in my shop module script, I store the customer's details as provided by PayPal to an orders table. However, I also wish to save products ordered in a transaction to a separate table linked by the order ID. However, it's the second part of the script that's not working, where I loop through the products in the session and then save them to the orders_products table. Is there a reason why the session data not being read? The code within shop.php is as follows: if ($paypal->validate_ipn()) { $name = $paypal->ipn_data['address_name']; $street_1 = $paypal->ipn_data['address_street']; $street_2 = ""; $city = $paypal->ipn_data['address_city']; $state = $paypal->ipn_data['address_state']; $zip = $paypal->ipn_data['address_zip']; $country = $paypal->ipn_data['address_country']; $txn_id = $paypal->ipn_data['txn_id']; $sql = "INSERT INTO orders (name, street_1, street_2, city, state, zip, country, txn_id) VALUES (:name, :street_1, :street_2, :city, :state, :zip, :country, :txn_id)"; $smt = $this->pdo->prepare($sql); $smt->bindParam(':name', $name, PDO::PARAM_STR); $smt->bindParam(':street_1', $street_1, PDO::PARAM_STR); $smt->bindParam(':street_2', $street_2, PDO::PARAM_STR); $smt->bindParam(':city', $city, PDO::PARAM_STR); $smt->bindParam(':state', $state, PDO::PARAM_STR); $smt->bindParam(':zip', $zip, PDO::PARAM_STR); $smt->bindParam(':country', $country, PDO::PARAM_STR); $smt->bindParam(':txn_id', $txn_id, PDO::PARAM_INT); $smt->execute(); // save products to orders relationship $order_id = $this->pdo->lastInsertId(); // $cart = $this->session->get('cart'); $cart = $this->session->get('cart'); foreach ($cart as $product_id => $item) { $quantity = $item['quantity']; $sql = "INSERT INTO orders_products (order_id, product_id, quantity) VALUES ('$order_id', '$product_id', '$quantity')"; $res = $this->pdo->query($sql); } $this->session->del('cart'); mail('[email protected]', 'IPN result', 'IPN was successful on wrestling-wear.com'); } else { mail('[email protected]', 'IPN result', 'IPN failed on wrestling-wear.com'); } And I'm using the PayPal IPN class for PHP as found here: http://www.micahcarrick.com/04-19-2005/php-paypal-ipn-integration-class.html, but the contents of the validate_ipn() method is as follows: public function validate_ipn() { $url_parsed = parse_url($this->paypal_url); $post_string = ''; foreach ($_POST as $field => $value) { $this->ipn_data[$field] = $value; $post_string.= $field.'='.urlencode(stripslashes($value)).'&'; } $post_string.= "cmd=_notify-validate"; // append IPN command // open the connection to PayPal $fp = fsockopen($url_parsed[host], "80", $err_num, $err_str, 30); if (!$fp) { // could not open the connection. If logging is on, the error message will be in the log $this->last_error = "fsockopen error no. $errnum: $errstr"; $this->log_ipn_results(false); return false; } else { // post the data back to PayPal fputs($fp, "POST $url_parsed[path] HTTP/1.1\r\n"); fputs($fp, "Host: $url_parsed[host]\r\n"); fputs($fp, "Content-type: application/x-www-form-urlencoded\r\n"); fputs($fp, "Content-length: ".strlen($post_string)."\r\n"); fputs($fp, "Connection: close\r\n\r\n"); fputs($fp, $post_string . 
"\r\n\r\n"); // loop through the response from the server and append to variable while (!feof($fp)) { $this->ipn_response.= fgets($fp, 1024); } fclose($fp); // close connection } if (eregi("VERIFIED", $this->ipn_response)) { // valid IPN transaction $this->log_ipn_results(true); return true; } else { // invalid IPN transaction; check the log for details $this->last_error = 'IPN Validation Failed.'; $this->log_ipn_results(false); return false; } }

    Read the article

  • How to find Microsoft.SharePoint.ApplicationPages.dll and some other assemblies

    - by KunaalKapoor
    You may be wondering where to find Microsoft.SharePoint.ApplicationPages.dll if you are creating a new SharePoint application page. Don’t worry, it resides in the _app_bin folder of your SharePoint site’s virtual directory. Assuming your IIS inetpub is on C:, the exact path of Microsoft.SharePoint.ApplicationPages.dll is C:\Inetpub\wwwroot\wss\VirtualDirectories\<Your Virtual Server>\_app_bin\Microsoft.SharePoint.ApplicationPages.dll. Here is the full list of assemblies in the _app_bin folder:
    - Microsoft.Office.DocumentManagement.Pages.dll
    - Microsoft.Office.officialfileSoap.dll
    - Microsoft.Office.Policy.Pages.dll
    - Microsoft.Office.SlideLibrarySoap.dll
    - Microsoft.Office.Workflow.Pages.dll
    - Microsoft.Office.WorkflowSoap.dll
    - Microsoft.SharePoint.ApplicationPages.dll
    - STSSOAP.DLL

    Read the article

  • Linq-to-sql Compiled Query is caching result from disposed DataContext

    - by Vladimir Kojic
    Compiled query: public static Func<OperationalDataContext, short, Machine> QueryMachineById = CompiledQuery.Compile((OperationalDataContext db, short machineID) => db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault()); It looks like the compiled query is caching the Machine object and returning the same object even when the query is called from a new DataContext (I’m disposing the DataContext in the service, but I’m getting the Machine from the previous DataContext). I use POCOs and XML mapping. Getting a cached object from the same DataContext is OK, but when I call the query with a new DataContext I don’t want to get an object from the old DataContext. Is there something I’m not doing right? Thanks, Vladimir
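
    For comparison, here is a minimal sketch of the usage pattern the compiled query expects: the delegate is compiled once, but a freshly created DataContext is passed in on every call, and for read-only lookups object tracking can be switched off so no cached or attached entities come back. The OperationalDataContext and Machine types are the ones from the question; MachineRepository and the context factory are hypothetical.

        using System;
        using System.Data.Linq;
        using System.Linq;

        public static class MachineQueries
        {
            // Compiled once; the delegate itself holds no DataContext.
            public static readonly Func<OperationalDataContext, short, Machine> ById =
                CompiledQuery.Compile((OperationalDataContext db, short machineId) =>
                    db.Machines.SingleOrDefault(m => m.MachineID == machineId));
        }

        public class MachineRepository
        {
            private readonly Func<OperationalDataContext> _contextFactory;

            public MachineRepository(Func<OperationalDataContext> contextFactory)
            {
                _contextFactory = contextFactory;
            }

            public Machine GetMachine(short machineId)
            {
                // A brand new context per call, disposed when done.
                using (OperationalDataContext db = _contextFactory())
                {
                    // Read-only lookup: skip the identity map / change tracking.
                    db.ObjectTrackingEnabled = false;
                    return MachineQueries.ById(db, machineId);
                }
            }
        }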

    Read the article

  • 6 Ways to Free Up Hard Drive Space Used by Windows System Files

    - by Chris Hoffman
    We’ve previously covered the standard ways to free up space on Windows. But if you have a small solid-state drive and really want more hard space, there are geekier ways to reclaim hard drive space. Not all of these tips are recommended — in fact, if you have more than enough hard drive space, following these tips may actually be a bad idea. There’s a tradeoff to changing all of these settings. Erase Windows Update Uninstall Files Windows allows you to uninstall patches you install from Windows Update. This is helpful if an update ever causes a problem — but how often do you need to uninstall an update, anyway? And will you really ever need to uninstall updates you’ve installed several years ago? These uninstall files are probably just wasting space on your hard drive. A recent update released for Windows 7 allows you to erase Windows Update files from the Windows Disk Cleanup tool. Open Disk Cleanup, click Clean up system files, check the Windows Update Cleanup option, and click OK. If you don’t see this option, run Windows Update and install the available updates. Remove the Recovery Partition Windows computers generally come with recovery partitions that allow you to reset your computer back to its factory default state without juggling discs. The recovery partition allows you to reinstall Windows or use the Refresh and Reset your PC features. These partitions take up a lot of space as they need to contain a complete system image. On Microsoft’s Surface Pro, the recovery partition takes up about 8-10 GB. On other computers, it may be even larger as it needs to contain all the bloatware the manufacturer included. Windows 8 makes it easy to copy the recovery partition to removable media and remove it from your hard drive. If you do this, you’ll need to insert the removable media whenever you want to refresh or reset your PC. On older Windows 7 computers, you could delete the recovery partition using a partition manager — but ensure you have recovery media ready if you ever need to install Windows. If you prefer to install Windows from scratch instead of using your manufacturer’s recovery partition, you can just insert a standard Window disc if you ever want to reinstall Windows. Disable the Hibernation File Windows creates a hidden hibernation file at C:\hiberfil.sys. Whenever you hibernate the computer, Windows saves the contents of your RAM to the hibernation file and shuts down the computer. When it boots up again, it reads the contents of the file into memory and restores your computer to the state it was in. As this file needs to contain much of the contents of your RAM, it’s 75% of the size of your installed RAM. If you have 12 GB of memory, that means this file takes about 9 GB of space. On a laptop, you probably don’t want to disable hibernation. However, if you have a desktop with a small solid-state drive, you may want to disable hibernation to recover the space. When you disable hibernation, Windows will delete the hibernation file. You can’t move this file off the system drive, as it needs to be on C:\ so Windows can read it at boot. Note that this file and the paging file are marked as “protected operating system files” and aren’t visible by default. Shrink the Paging File The Windows paging file, also known as the page file, is a file Windows uses if your computer’s available RAM ever fills up. Windows will then “page out” data to disk, ensuring there’s always available memory for applications — even if there isn’t enough physical RAM. 
The paging file is located at C:\pagefile.sys by default. You can shrink it or disable it if you’re really crunched for space, but we don’t recommend disabling it as that can cause problems if your computer ever needs some paging space. On our computer with 12 GB of RAM, the paging file takes up 12 GB of hard drive space by default. If you have a lot of RAM, you can certainly decrease the size — we’d probably be fine with 2 GB or even less. However, this depends on the programs you use and how much memory they require. The paging file can also be moved to another drive — for example, you could move it from a small SSD to a slower, larger hard drive. It will be slower if Windows ever needs to use the paging file, but it won’t use important SSD space. Configure System Restore Windows seems to use about 10 GB of hard drive space for “System Protection” by default. This space is used for System Restore snapshots, allowing you to restore previous versions of system files if you ever run into a system problem. If you need to free up space, you could reduce the amount of space allocated to system restore or even disable it entirely. Of course, if you disable it entirely, you’ll be unable to use system restore if you ever need it. You’d have to reinstall Windows, perform a Refresh or Reset, or fix any problems manually. Tweak Your Windows Installer Disc Want to really start stripping down Windows, ripping out components that are installed by default? You can do this with a tool designed for modifying Windows installer discs, such as WinReducer for Windows 8 or RT Se7en Lite for Windows 7. These tools allow you to create a customized installation disc, slipstreaming in updates and configuring default options. You can also use them to remove components from the Windows disc, shrinking the size of the resulting Windows installation. This isn’t recommended as you could cause problems with your Windows installation by removing important features. But it’s certainly an option if you want to make Windows as tiny as possible. Most Windows users can benefit from removing Windows Update uninstallation files, so it’s good to see that Microsoft finally gave Windows 7 users the ability to quickly and easily erase these files. However, if you have more than enough hard drive space, you should probably leave well enough alone and let Windows manage the rest of these settings on its own. Image Credit: Yutaka Tsutano on Flickr     
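
    As a small aside on the hibernation step above, the same change can be scripted; this is only a sketch that shells out to the built-in powercfg tool (the documented "powercfg /hibernate off" switch), and it needs to run elevated because Windows deletes hiberfil.sys as part of the change.

        using System;
        using System.Diagnostics;

        class DisableHibernation
        {
            static void Main()
            {
                var startInfo = new ProcessStartInfo("powercfg", "/hibernate off")
                {
                    UseShellExecute = true,
                    Verb = "runas" // prompt for elevation if not already an administrator
                };

                using (var process = Process.Start(startInfo))
                {
                    process.WaitForExit();
                    Console.WriteLine("powercfg exited with code {0}", process.ExitCode);
                }
            }
        }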

    Read the article

  • VS2008 Setup Project for C# Project

    - by xopht
    I've built an app using wmp.dll, which is a Windows system file on my XP machine. When I tried to add the outputs of the above project to my Setup Project, VS warned that ''wmp.dll' should be excluded because its source file 'C:\WINDOWS\system32\wmp.dll' is under Windows System File Protection'. There are three things under the 'Detected Dependencies' folder: Microsoft .NET Framework, Interop.WMPLib.dll and wmp.dll. The app works okay on my machine, of course. But if I install this onto a Windows Server 2003 machine, the app does not launch. I think this is because different versions of the OS use different versions of wmp.dll. Anyway, how can I fix this? PS: I've even excluded wmp.dll from the Setup Project.

    Read the article

  • Silverlight WCF service acting strange

    - by Caleb Sandfort
    Hi, I have a Silverlight project that calls into a WCF service. Everything works fine on my local machine. However, when I deploy to a virtual machine with the exact same query, the WCF service returns, but the result is empty. I've tried debugging, but have not been able to get it to break in the WCF service. Any ideas what the problem could be, or how I could go about debugging it? Thanks. Update: I figured out what the problem is, but am not sure what the solution is. In my Silverlight project the WCF service I am referencing is http://localhost/.../SilverlightApiService.svc. I used Fiddler on my VM to see the request that was made, and instead of trying to contact the above service it was trying to contact http:///.../SilverlightApiService.svc. So, for some reason my machine name is getting inserted in there instead of localhost. Any thoughts on this would be appreciated.
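
    One hedge against a hard-coded localhost endpoint, assuming the Silverlight XAP is served from the same site as the service, is to build the endpoint address from wherever the application was actually downloaded (Application.Current.Host.Source) instead of the address baked into ServiceReferences.ClientConfig. In the sketch below, SilverlightApiServiceClient stands in for the generated proxy class and the "/Services/SilverlightApiService.svc" path is an assumption.

        using System;
        using System.ServiceModel;
        using System.Windows;

        public static class ServiceClientFactory
        {
            public static SilverlightApiServiceClient Create()
            {
                // The URI the XAP was downloaded from, e.g. http://myserver/ClientBin/MyApp.xap
                Uri xapUri = Application.Current.Host.Source;

                // Resolve the service relative to the hosting site rather than localhost.
                var serviceUri = new Uri(xapUri, "/Services/SilverlightApiService.svc");

                var binding = new BasicHttpBinding();
                var address = new EndpointAddress(serviceUri);

                return new SilverlightApiServiceClient(binding, address);
            }
        }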

    Read the article

  • What does your Python development workbench look like?

    - by Fabian Fagerholm
    First, a scene-setter to this question: Several questions on this site have to do with selection and comparison of Python IDEs. (The top one currently is What IDE to use for Python). In the answers you can see that many Python programmers use simple text editors, many use sophisticated text editors, and many use a variety of what I would call "actual" integrated development environments – a single program in which all development is done: managing project files, interfacing with a version control system, writing code, refactoring code, making build configurations, writing and executing tests, "drawing" GUIs, and so on. Through its GUI, an IDE supports different kinds of workflows to accomplish different tasks during the journey of writing a program or making changes to an existing one. The exact features vary, but a good IDE has sensible workflows and automates things to let the programmer concentrate on the creative parts of writing software. The non-IDE way of writing large programs relies on a collection of tools that are typically single-purpose; they do "one thing well" as per the Unix philosophy. This "non-integrated development environment" can be thought of as a workbench, supported by the OS and generic interaction through a text or graphical shell. The programmer creates workflows in their mind (or in a wiki?), automates parts and builds a personal workbench, often gradually and as experience accumulates. The learning curve is often steeper than with an IDE, but those who have taken the time to do this can often claim deeper understanding of their tools. (Whether they are better programmers is not part of this question.) With advanced editor-platforms like Emacs, the pieces can be integrated into a whole, while with simpler editors like gedit or TextMate, the shell/terminal is typically the "command center" to drive the workbench. Sometimes people extend an existing IDE to suit their needs. What does your Python development workbench look like? What workflows have you developed and how do they work? For the first question, please give the main "driving" program – the one that you use to control the rest (Emacs, shell, etc.) the "small tools" -- the programs you reach for when doing different tasks For the second question, please describe what the goal of the workflow is (eg. "set up a new project" or "doing initial code design" or "adding a feature" or "executing tests") what steps are in the workflow and what commands you run for each step (eg. in the shell or in Emacs) Also, please describe the context of your work: do you write small one-off scripts, do you do web development (with what framework?), do you write data-munching applications (what kind of data and for what purpose), do you do scientific computing, desktop apps, or something else? Note: A good answer addresses the perspectives above – it doesn't just list a bunch of tools. It will typically be a long answer, not a short one, and will take some thinking to produce; maybe even observing yourself working.

    Read the article

  • C#: Does an IDisposable in a Halted Iterator Dispose?

    - by James Michael Hare
    If that sounds confusing, let me give you an example. Let's say you expose a method to read a database of products, and instead of returning a List<Product> you return an IEnumerable<Product> in iterator form (yield return). This accomplishes several good things: The IDataReader is not passed out of the Data Access Layer which prevents abstraction leak and resource leak potentials. You don't need to construct a full List<Product> in memory (which could be very big) if you just want to forward iterate once. If you only want to consume up to a certain point in the list, you won't incur the database cost of looking up the other items. This could give us an example like: 1: // a sample data access object class to do standard CRUD operations. 2: public class ProductDao 3: { 4: private DbProviderFactory _factory = SqlClientFactory.Instance 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: // must create the connection 10: using (var con = _factory.CreateConnection()) 11: { 12: con.ConnectionString = _productsConnectionString; 13: con.Open(); 14:  15: // create the command 16: using (var cmd = _factory.CreateCommand()) 17: { 18: cmd.Connection = con; 19: cmd.CommandText = _getAllProductsStoredProc; 20: cmd.CommandType = CommandType.StoredProcedure; 21:  22: // get a reader and pass back all results 23: using (var reader = cmd.ExecuteReader()) 24: { 25: while(reader.Read()) 26: { 27: yield return new Product 28: { 29: Name = reader["product_name"].ToString(), 30: ... 31: }; 32: } 33: } 34: } 35: } 36: } 37: } The database details themselves are irrelevant. I will say, though, that I'm a big fan of using the System.Data.Common classes instead of your provider specific counterparts directly (SqlCommand, OracleCommand, etc). This lets you mock your data sources easily in unit testing and also allows you to swap out your provider in one line of code. In fact, one of the shared components I'm most proud of implementing was our group's DatabaseUtility library that simplifies all the database access above into one line of code in a thread-safe and provider-neutral way. I went with my own flavor instead of the EL due to the fact I didn't want to force internal company consumers to use the EL if they didn't want to, and it made it easy to allow them to mock their database for unit testing by providing a MockCommand, MockConnection, etc that followed the System.Data.Common model. One of these days I'll blog on that if anyone's interested. Regardless, you often have situations like the above where you are consuming and iterating through a resource that must be closed once you are finished iterating. For the reasons stated above, I didn't want to return IDataReader (that would force them to remember to Dispose it), and I didn't want to return List<Product> (that would force them to hold all products in memory) -- but the first time I wrote this, I was worried. What if you never consume the last item and exit the loop? Are the reader, command, and connection all disposed correctly? Of course, I was 99.999999% sure the creators of C# had already thought of this and taken care of it, but inspection in Reflector was difficult due to the nature of the state machines yield return generates, so I decided to try a quick example program to verify whether or not Dispose() will be called when an iterator is broken from outside the iterator itself -- i.e. before the iterator reports there are no more items. 
So I wrote a quick Sequencer class with a Dispose() method and an iterator for it. Yes, it is COMPLETELY contrived: 1: // A disposable sequence of int -- yes this is completely contrived... 2: internal class Sequencer : IDisposable 3: { 4: private int _i = 0; 5: private readonly object _mutex = new object(); 6:  7: // Constructs an int sequence. 8: public Sequencer(int start) 9: { 10: _i = start; 11: } 12:  13: // Gets the next integer 14: public int GetNext() 15: { 16: lock (_mutex) 17: { 18: return _i++; 19: } 20: } 21:  22: // Dispose the sequence of integers. 23: public void Dispose() 24: { 25: // force output immediately (flush the buffer) 26: Console.WriteLine("Disposed with last sequence number of {0}!", _i); 27: Console.Out.Flush(); 28: } 29: } And then I created a generator (infinite-loop iterator) that did the using block for auto-Disposal: 1: // simply defines an extension method off of an int to start a sequence 2: public static class SequencerExtensions 3: { 4: // generates an infinite sequence starting at the specified number 5: public static IEnumerable<int> GetSequence(this int starter) 6: { 7: // note the using here, will call Dispose() when block terminated. 8: using (var seq = new Sequencer(starter)) 9: { 10: // infinite loop on this generator, means must be bounded by caller! 11: while(true) 12: { 13: yield return seq.GetNext(); 14: } 15: } 16: } 17: } This is really the same conundrum as the database problem originally posed. Here we are using iteration (yield return) over a large collection (infinite sequence of integers). If we cut the sequence short by breaking iteration, will that using block exit and hence, Dispose be called? Well, let's see: 1: // The test program class 2: public class IteratorTest 3: { 4: // The main test method. 5: public static void Main() 6: { 7: Console.WriteLine("Going to consume 10 of infinite items"); 8: Console.Out.Flush(); 9:  10: foreach(var i in 0.GetSequence()) 11: { 12: // could use TakeWhile, but wanted to output right at break... 13: if(i >= 10) 14: { 15: Console.WriteLine("Breaking now!"); 16: Console.Out.Flush(); 17: break; 18: } 19:  20: Console.WriteLine(i); 21: Console.Out.Flush(); 22: } 23:  24: Console.WriteLine("Done with loop."); 25: Console.Out.Flush(); 26: } 27: } So, what do we see? Do we see the "Disposed" message from our dispose, or did the Dispose get skipped because from an "eyeball" perspective we should be locked in that infinite generator loop? Here's the results: 1: Going to consume 10 of infinite items 2: 0 3: 1 4: 2 5: 3 6: 4 7: 5 8: 6 9: 7 10: 8 11: 9 12: Breaking now! 13: Disposed with last sequence number of 11! 14: Done with loop. Yes indeed, when we break the loop, the state machine that C# generates for yield iterate exits the iteration through the using blocks and auto-disposes the IDisposable correctly. I must admit, though, the first time I wrote one, I began to wonder and that led to this test. If you've never seen iterators before (I wrote a previous entry here) the infinite loop may throw you, but you have to keep in mind it is not a linear piece of code, that every time you hit a "yield return" it cedes control back to the state machine generated for the iterator. And this state machine, I'm happy to say, is smart enough to clean up the using blocks correctly. I suspected those wily guys and gals at Microsoft engineered it well, and I wasn't disappointed. But, I've been bitten by assumptions before, so it's good to test and see. 
Yes, maybe you knew it would or figured it would, but isn't it nice to know? And as those campy 80s G.I. Joe cartoon public service reminders always taught us, "Knowing is half the battle...". Technorati Tags: C#,.NET

    Read the article

  • Visual Studio Debugging Issue

    - by Aaron M
    Seeing an issue when debugging in Visual Studio. All of the values under watch, and in the hover-over window, show up incorrectly. The only values that show properly are values that are local to the method I am currently stepping through. For example, the watch value for 'this' when debugging shows the following under Value: 0x00000000ffac0388 { btnBack=0x00000000ffaccf20 btnReply=0x00000000ffacd200 btnForward=0x00000000ffacd420...} Some other variables show this error, even though the variable is there: error: 'this.foo' does not exist. The machine recently had Windows 7 64-bit installed, and this problem has occurred since then. Visual Studio has been reinstalled on this machine, and we verified that its settings were exactly the same as on a different PC with the same hardware and configuration.

    Read the article

  • Eclipse + Git: Can't add files to my repository

    - by Stegeman
    I use Eclipse (Helios) with PDT and EGit. I have a project without versioning, so I created a git repository for it by doing Team -> Share Project. When I try to add the files of my project to the repository via Team -> Add, I get an exception: "Failed to add resource to index: Exception caught during execution of add command". When I add the files manually on the command line, everything works fine. Any ideas? EDIT: The error Eclipse gives is: Caused by: org.eclipse.jgit.errors.ObjectWritingException: Unable to create new object: Z:\eage_layout\.git\objects\60\f30dd232bd6ddaeb198fb11400c2613a072189 at org.eclipse.jgit.storage.file.ObjectDirectoryInserter.insert(ObjectDirectoryInserter.java:100) at org.eclipse.jgit.api.AddCommand.call(AddCommand.java:177) The code I'm running is located on a virtual machine running CentOS. I'm working on a Windows machine and using a Samba share to access the code on the virtual machine. I've set the filesystem permissions on my .git directory to 777, but it still does not work.

    Read the article

  • SQL CLR not properly enabling

    - by dnolan
    We have a SQL server running SQL 2005 Workgroup 64-bit (9.0.4273) on Windows Server 2003 64-bit. We have run sp_configure and reconfigured the server, which indicates that the CLR is now enabled:
    exec sp_configure 'clr enabled', '1'
    go
    reconfigure
    go
    However, when trying to call CREATE ASSEMBLY the server completely dies on us and we have to do a full reboot of the machine. A little more diagnostic information: even though 'clr enabled' is set to 1 and we have rebooted the full server, running select * from sys.dm_clr_properties returns the rows directory, version and state, with state reading "Locked CLR version with mscoree", which is what it says when the CLR is not enabled on another machine. On a correctly enabled machine (after reboot) this view reads directory = C:\Windows\Microsoft.NET\Framework64\v2.0.50727\, version = v2.0.50727, state = "CLR is initialized".
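
    For what it is worth, the same check can be scripted so the two servers are easy to compare before and after a reboot; the sketch below just prints the name/value pairs from sys.dm_clr_properties (the connection string is a placeholder), and a healthy instance should report a state of "CLR is initialized".

        using System;
        using System.Data.SqlClient;

        class ClrDiagnostics
        {
            static void Main()
            {
                const string connectionString = "Server=localhost;Integrated Security=True";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT name, value FROM sys.dm_clr_properties;", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        // Expect rows named directory, version and state.
                        while (reader.Read())
                        {
                            Console.WriteLine("{0} = {1}", reader["name"], reader["value"]);
                        }
                    }
                }
            }
        }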

    Read the article

  • How to Create Custom SharePoint Workflows in Visual Studio 2008

    Whereas simple workflows are possible using Microsoft Office SharePoint Designer, you will soon reach the point where you will need to use Visual Studio. In the third article in Charles' introduction to Workflows in Sharepoint, he demonstrates how to create a workflow from scratch using Visual Studio, and discusses the relative merits of the two tools for this sort of development work.

    Read the article

  • Visual Studio 2010 and .NET Framework 4 Training Kit April 2010 Release

    - by Harish Pavithran
    The Visual Studio 2010 and .NET Framework 4 Training Kit includes presentations, hands-on labs, and demos. This content is designed to help you learn how to utilize the Visual Studio 2010 features and a variety of framework technologies, including: C# 4, Visual Basic 10, F#, Parallel Extensions, Windows Communication Foundation, Windows Workflow, Windows Presentation Foundation, ASP.NET 4, Windows 7, Entity Framework, ADO.NET Data Services, Managed Extensibility Framework, and Visual Studio Team System. This version of the Training Kit works with Visual Studio 2010 and .NET Framework 4. Here is the link, enjoy: www.microsoft.com/downloads/details.aspx

    Read the article

  • Migrating from VisualSVN on windows to linux based svn

    - by Jonathan
    I'd like to migrate my svn repository from my local computer running windows and VisualSVN 2.1.2 to an svn app on webfaction (my Linux hosting solution). Initially I tried dumping the svn: svnadmin dump *path_to_repository* *dumpfile_name* and loading it on the Linux machine svnadmin load *dumpfile_name* I received the following error: svnadmin: Can't open file '*dumpfile_path_and_name*/format': Not a directory I found that on my Windows machine I do have a format folder under the repository. So I copied the entire repository to the Linux machine and tried: svnadmin load *path_to_repository_copy* I received the following error: svnadmin: Expected FS format between '1' and '3'; found format '4' what should I do?

    Read the article

  • Porting .NET 3.5 app to Linux

    - by gregarobinson
    Started doing research on porting a .NET 3.5 Windows Service application to Linux last week and today. There is a WinForms maintenance\admin tool that needs to be ported too. Looking at Mono and related friends. Need to come up with a way to support MSMQ and Windows Workflow Foundation. It looks like the support for these in Mono is in alpha and not stable yet.

    Read the article

  • "The specified table does not exist" - for the administrator it does! Vista only issue

    - by Simon Nunn
    Hello, I've got a weird bug occurring in a compact database on a Vista deployment machine. Basically the sdf file seems to be schizophrenic. The client application gets the error in the title when running as a user, but not when I use "Run as administrator". I don't see this problem on my XP development machine. I installed Management Studio onto the deployment machine and opened two versions of the application, one as a user and one as administrator. When I query SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES I see 21 tables in the one and 26 in the other, and the administrator is seeing fewer tables. It turns out that the user version, with 26 tables, is a previous incarnation of this database. Any ideas why this is happening?

    Read the article

  • K2 4.5 Quick Thoughts

    I just finished attending a webcast on K2 4.5 and I thought I'd share a few quick thoughts.

    Power User Story Improved
    Given it is just a presentation and I haven't actually played with it, the story seemed improved and more believable that real-world power users would be able to define workflows in SharePoint. Power users who are comfortable with Excel functions may be able to build some more worthwhile workflows, since there is new support for inline functions and conditions. The new Silverlight K2 designer seems pretty user friendly, though the dialog windows can really stack up, which may get confusing. I thought the neatest part was that the workflow can be defined just by starting from a SharePoint list's settings, which may be okay for some organizations and for simpler workflows that don't need to be defined separately and pushed through lots of testing in different environments. The standalone K2 Studio is back. In K2 2003 it was required because Visual Studio integration didn't exist. It's back now for use by power users who need functionality up to the point of code. Not sure if this…

    Administration/Installation
    Installation is supposed to be simplified, with unattended install and other details I didn't catch. Install and configuration has always seemed daunting to me, so anything to improve that is good. Related to that, there is a new tool that is meant to help diagnose issues in your installation. That may include figuring out missing permissions or services that aren't running. Also, all K2 SharePoint features are now deployed as solutions.

    Dynamic SQL Service Broker
    Create a SmartObject that goes against a table that you created, NOT the SmartBox. This seems promising and something that maybe should have been there all along.

    Reference Event
    Allows you to call functionality that you've referenced; in the sample shown it was calling a web service that was referenced. It seemed odd because it was really like writing code using dialogs (call constructor, set timeout, call web service method). Seemed a little odd to me.

    Help
    We were reminded that help.k2.com is a newish site that is supposed to be the MSDN of K2 for partners and customers.

    VS 2010 Support
    Still no hard date on this, but what we were told is approximately 90 days after VS 2010 is officially released.

    Read the article

  • Microsoft.Biztalk.explorerom.dll reference in asp.net application resulting in System.NullReferenceException

    - by sheetal.oza
    Hi, I have an ASP.NET application to start/stop applications and ports of BizTalk Server 2006 R2. I have used Microsoft.Biztalk.explorerom.dll (C:/Program Files/Biztalk Server 2006/Developer tool) to achieve this. This works fine on the development machine, since BizTalk Server is installed on the local machine. But in the production environment (ASP.NET web server, Windows 2003 and IIS 6.0) it gives a System.NullReferenceException (object reference not set...) at BtsCatalogExplorer explorer = (BtsCatalogExplorer)myGroup.CreateInstance(typeof(BtsCatalogExplorer)). My BizTalk server and SQL server are on two different boxes. In my setup (ASP.NET web application) I am adding Microsoft.Biztalk.explorerom.dll and Microsoft.Biztalk.Applicationdeployment.engine.dll to the GAC, but still no luck. Do I need to install BizTalk Server on my local machine even though I am connecting to a different BizTalk server? Any help is appreciated...
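
    For reference, the usual ExplorerOM pattern is to point BtsCatalogExplorer at the BizTalk management database with a connection string rather than obtaining it from a group object, and the ASP.NET process identity then needs rights on that database; ExplorerOM is also commonly reported to be 32-bit only, so a 64-bit application pool can cause failures of its own. A sketch with a placeholder management-database connection string:

        using System;
        using Microsoft.BizTalk.ExplorerOM;

        class ListBizTalkApplications
        {
            static void Main()
            {
                var catalog = new BtsCatalogExplorer();

                // Placeholder: the management database of the remote BizTalk group.
                catalog.ConnectionString =
                    "Server=BIZTALKSQL01;Initial Catalog=BizTalkMgmtDb;Integrated Security=SSPI";

                // The calling identity must have access to BizTalkMgmtDb.
                foreach (Application application in catalog.Applications)
                {
                    Console.WriteLine(application.Name);
                }
            }
        }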

    Read the article

  • MSDN Webcast: BenkoTips Live and On Demand: Visual Studio 2010 and SharePoint Server 2010

    See how Microsoft Visual Studio 2010 tools work with Microsoft SharePoint Server 2010. From workflow to custom lists to creating a visual Web part, we show you how to take advantage of the tools for building on SharePoint...

    Read the article

  • Access DotNetPanel/WebsitePanel Enterprise Server from outside the loopback address

    - by Ryan French
    Hi all, I am currently investigating integration of DotNetPanel/WebsitePanel with a website that is running ColdFusion. The issue I have come across is that WebsitePanel seems to allow access to the Enterprise Server (where the web services I need access to live) only when the request is coming from the same physical machine (i.e. on the loopback address 127.0.0.1). This unfortunately doesn't work out too well with our current setup, as we have our ColdFusion server running on one machine and WebsitePanel running on another. Does anyone have any suggestions as to how we can change this? Currently we are investigating writing a transparent proxy server that will sit on the portal machine and pass requests between the two sites, but that is less than ideal.
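
    If the pass-through idea is pursued, a very small HTTP forwarder on the portal machine can be enough for web-service calls. The sketch below uses placeholder port numbers and only relays the method, body and content type (no header fidelity, no HTTPS), streaming each request to the Enterprise Server on the loopback address and the response back to the caller.

        using System;
        using System.IO;
        using System.Net;

        class MiniForwarder
        {
            // Placeholders: adjust the listen prefix and the Enterprise Server URL to the real setup.
            const string ListenPrefix = "http://+:9003/";
            const string TargetBase = "http://127.0.0.1:9002";

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add(ListenPrefix); // may require admin rights or a urlacl reservation
                listener.Start();
                Console.WriteLine("Forwarding {0} -> {1}", ListenPrefix, TargetBase);

                while (true)
                {
                    HttpListenerContext context = listener.GetContext();
                    try
                    {
                        Forward(context);
                    }
                    catch (WebException ex)
                    {
                        // Relay upstream HTTP errors (401, 500, ...) instead of swallowing them.
                        var upstreamError = ex.Response as HttpWebResponse;
                        if (upstreamError != null) RelayResponse(upstreamError, context.Response);
                        else context.Response.StatusCode = 502;
                    }
                    finally
                    {
                        context.Response.Close();
                    }
                }
            }

            static void Forward(HttpListenerContext context)
            {
                // Rebuild the URL against the Enterprise Server, keeping path and query string.
                var upstream = (HttpWebRequest)WebRequest.Create(TargetBase + context.Request.RawUrl);
                upstream.Method = context.Request.HttpMethod;
                upstream.ContentType = context.Request.ContentType;

                if (context.Request.HasEntityBody)
                {
                    using (Stream body = context.Request.InputStream)
                    using (Stream upstreamBody = upstream.GetRequestStream())
                    {
                        Copy(body, upstreamBody);
                    }
                }

                using (var response = (HttpWebResponse)upstream.GetResponse())
                {
                    RelayResponse(response, context.Response);
                }
            }

            static void RelayResponse(HttpWebResponse from, HttpListenerResponse to)
            {
                to.StatusCode = (int)from.StatusCode;
                to.ContentType = from.ContentType;
                using (Stream body = from.GetResponseStream())
                {
                    Copy(body, to.OutputStream);
                }
            }

            static void Copy(Stream from, Stream to)
            {
                var buffer = new byte[8192];
                int read;
                while ((read = from.Read(buffer, 0, buffer.Length)) > 0)
                {
                    to.Write(buffer, 0, read);
                }
            }
        }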

    Read the article

  • Application deployment problem

    - by Indranil Mutsuddy
    Hello everyone, I developed an application using VS 2008 and MS Access 2007 and it works fine. Now I have to make a setup for it (this is my first project). I have gone through many tutorials about deployment and tried the VS 2008 setup and deployment project, but after installation it only runs on my machine and not on others. Sometimes it shows the error "The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine" (that machine had both VS 2008 and MS Access installed). It has been a week and I have tried what I can and am still trying; I can't believe I am stuck here, nothing seems to work. Please help... The link below is my project, in case any of you could spare a little time to check it: http://www.4shared.com/file/7G14MULL/2_GameOnStart.html Thanking you all in advance. Regards, Indranil
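
    For context, the "Microsoft.ACE.OLEDB.12.0 provider is not registered" error usually means the Access Database Engine (the ACE OLE DB provider installed with Office 2007 or the separate redistributable) is missing on the target machine, or that the application is running as 64-bit while only the 32-bit provider is installed; installing the redistributable on the target and compiling the app for x86 are the usual cures. Below is a hedged sketch of the kind of connection the application presumably opens, with a placeholder .accdb path.

        using System;
        using System.Data.OleDb;

        class AccessConnectionCheck
        {
            static void Main()
            {
                // Placeholder path; the ACE provider must be installed on every target machine.
                const string connectionString =
                    @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\GameOnStart.accdb";

                try
                {
                    using (var connection = new OleDbConnection(connectionString))
                    {
                        connection.Open();
                        Console.WriteLine("Connected; the provider is registered on this machine.");
                    }
                }
                catch (InvalidOperationException ex)
                {
                    // Typically: "The 'Microsoft.ACE.OLEDB.12.0' provider is not registered..."
                    Console.WriteLine("Provider problem: {0}", ex.Message);
                }
            }
        }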

    Read the article

  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx
    In this post I’m going to outline a few common methods that can be used to increase the coverage of your test suite. This won’t be yet another post on why you should be doing testing; there are plenty of those types of posts already out there. Assuming you know you should be testing, then comes the problem of how do I actually fit that into my day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it? There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows for writing a program. When writing a program you can do it from a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid in certain contexts. Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem – the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and the level of quality in general. Think of each workflow as an opportunity that is available for you to take. Test workflows basically fall into 2 categories: test first or test after. Test first is the best approach. However, this post isn’t about the one and only best approach. I want to focus more on the lesser-known, less ideal approaches that still provide an opportunity for adding tests. In this post I’ll enumerate some test-after workflows. In my next post I’ll cover test-first.

    Bug Reporting
    When someone calls you up or forwards you an email with a vague description of a bug, it is usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, often skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug. The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to enhancement requests as well as bug reporting.

    Exploratory Testing
    Exploratory testing comes in when you aren’t sure how the system will behave in a new scenario. The scenario wasn’t planned for in the initial system requirements and there isn’t an existing test for it. By definition the system behaviour is “undefined”. So write a new unit test to define that behaviour. Add assertions to the tests to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples
    This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume the API. Good documentation will also include code examples. Don’t let these code examples merely exist in some accompanying manual; implement them in a test suite.
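
    As a tiny illustration of that point, a documentation example that lives in the test suite might look like the following (NUnit syntax; TemperatureConverter is a hypothetical API, included inline so the sample is self-contained).

        using NUnit.Framework;

        // Hypothetical public API, shown here only so the example compiles on its own.
        public static class TemperatureConverter
        {
            public static double CelsiusToFahrenheit(double celsius)
            {
                return celsius * 9.0 / 5.0 + 32.0;
            }
        }

        [TestFixture]
        public class TemperatureConverterDocExamples
        {
            // This is the exact snippet the API manual shows; keeping it here means the
            // documentation example is compiled and executed with every build.
            [Test]
            public void CelsiusToFahrenheit_UsageExample()
            {
                double fahrenheit = TemperatureConverter.CelsiusToFahrenheit(100.0);

                Assert.AreEqual(212.0, fahrenheit, 0.0001);
            }
        }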
    Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests
    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken. The tests for this core functionality are referred to as “smoke tests”. It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis
    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn’t be used strictly for numbers reporting. Companies shouldn’t set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless. The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked. There may be some classes or methods that aren’t being tested, which easily could be. The other reaction type is “OMG”. This is where you find a critical piece of code that isn’t under test. In both cases, go and add the missing tests.

    Test Refactoring
    The general theme of this post up to this point has been how to add more and more tests to a test suite. I’ll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn’t interfere with this primary goal. Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, then change them. Look to remove duplication. Duplicate setup code between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It’s OK to delete a test that isn’t pulling its own weight anymore. Remember to only start refactoring when all the tests are green. Don’t refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system. The unchanging, passing production code serves as the tests for the test suite while refactoring the tests.
    As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit this into the standard red-green-refactor cycle. The refactor step not only applies to production code but also to the tests, just not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy). That about covers most of the test-after workflows I can think of. In my next post I’ll get into test-first workflows.

    Read the article
