  • Figuring out the IIS Version for a given OS in .NET Code

    - by Rick Strahl
    Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS based application. I build several configuration tools for application configuration and installation, and depending on which version of IIS is available, different configuration paths are taken. For example, when dealing with an XP machine you can't set up an Application Pool for an application, because XP (IIS 5.1) didn't support Application Pools. Configuring 32 and 64 bit settings is easy in IIS 7, but this didn't work in prior versions, and so on. Along the same lines I saw a question on the AspInsiders list today regarding a similar issue, where somebody needed to know the IIS version as part of an ASP.NET application before the Request object is available. So it's useful to know which version of IIS you can possibly expect.

    This should be easy, right? But it turns out there's no real easy way to detect IIS on a machine. There's no registry key that gives you the full version number - you can detect installation but not which version is installed.

    The easiest way: Request.ServerVariables["SERVER_SOFTWARE"]

    The easiest way to determine the IIS version number is if you are already running inside of ASP.NET and you are inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like Microsoft-IIS/7.5 returned to you. It's a cinch to parse this to retrieve the version number. This works only in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not a likely use case, since most of the time you need to know the specific version of IIS when you are configuring or installing your application.

    The messy way: Match Windows OS Versions to IIS Versions

    Since version 5.x, IIS versions have always been tied very closely to the operating system. Meaning the only way to get a specific version of IIS was through the OS - you couldn't install another version of IIS on a given OS. Microsoft has a page that describes the OS version to IIS version relationship here: http://support.microsoft.com/kb/224609

    In .NET you can then sniff the OS version and, based on that, return the IIS version. The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS:

        /// <summary>
        /// Returns the IIS version for the given Operating System.
        /// Note this routine doesn't check to see if IIS is installed,
        /// it just returns the version of IIS that should run on the OS.
        ///
        /// Returns the value from Request.ServerVariables["SERVER_SOFTWARE"]
        /// if available. Otherwise uses OS sniffing to determine the OS version
        /// and returns the matching IIS version.
        /// </summary>
        /// <returns>version number or -1</returns>
        public static decimal GetIisVersion()
        {
            // if running inside of IIS parse the SERVER_SOFTWARE key
            // This would be most reliable
            if (HttpContext.Current != null && HttpContext.Current.Request != null)
            {
                string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"];
                if (!string.IsNullOrEmpty(os))
                {
                    // Microsoft-IIS/7.5
                    int dash = os.LastIndexOf("/");
                    if (dash > 0)
                    {
                        decimal iisVer = 0M;
                        if (Decimal.TryParse(os.Substring(dash + 1), out iisVer))
                            return iisVer;
                    }
                }
            }

            // use Version.Minor here (the .1 in 6.1) - Version.MajorRevision comes
            // from the revision field and is 0 on these OS versions
            decimal osVer = (decimal)Environment.OSVersion.Version.Major +
                            ((decimal)Environment.OSVersion.Version.Minor / 10);

            // Windows 7 and Win2008 R2
            if (osVer == 6.1M)
                return 7.5M;
            // Windows Vista and Windows 2008
            else if (osVer == 6.0M)
                return 7.0M;
            // Windows 2003 and XP 64 bit
            else if (osVer == 5.2M)
                return 6.0M;
            // Windows XP
            else if (osVer == 5.1M)
                return 5.1M;
            // Windows 2000
            else if (osVer == 5.0M)
                return 5.0M;

            // error result
            return -1M;
        }

    Talk about a brute force approach, but it works. This code goes only back to IIS 5 - anything before that is not something you possibly would want to have running. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is?

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS.
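    For instance, an installer can branch on the returned version before attempting any Application Pool work. A minimal sketch of that idea (the two Setup* helpers are hypothetical placeholders for whatever configuration routines you have, not part of the code above):

        decimal iisVersion = GetIisVersion();
        if (iisVersion >= 6.0M)
        {
            // IIS 6 and later support Application Pools
            SetupApplicationPool("MyAppPool");      // hypothetical helper
        }
        else
        {
            // IIS 5.x (Windows 2000/XP): no Application Pools,
            // fall back to configuring just the virtual directory
            SetupVirtualDirectoryOnly();            // hypothetical helper
        }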

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 3)

    Over the past two weeks I've shown how to build a store locator application using ASP.NET and the free Google Maps API and Google's geocoding service. Part 1 looked at creating the database to record the store locations. This database contains a table named Stores with columns capturing each store's address and latitude and longitude coordinates. Part 1 also showed how to use Google's geocoding service to translate a user-entered address into latitude and longitude coordinates, which could then be used to retrieve and display those stores within (roughly) a 15 mile area. At the end of Part 1, the results page listed the nearby stores in a grid. In Part 2 we used the Google Maps API to add an interactive map to the search results page, with each nearby store displayed on the map as a marker. The map added in Part 2 certainly improves the search results page, but the way the nearby stores are displayed on the map leaves a bit to be desired. For starters, each nearby store is displayed on the map using the same marker icon, namely a red pushpin. This makes it difficult to match up the nearby stores listed in the grid with those displayed on the map. Hovering the mouse over a marker on the map displays the store number in a tooltip, but ideally a user could click a marker to see more detailed information about the store, such as its address, phone number, a photo of the storefront, and so forth. This third and final installment shows how to enhance the map created in Part 2. Specifically, we'll see how to customize the marker icons displayed in the map to make it easier to identify which marker corresponds to which nearby store location. We'll also look at adding rich popup windows to each marker, which includes detailed store information and can be updated further to include pictures and other HTML content. Read on to learn more!
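    As an aside, the "stores within (roughly) a 15 mile area" filter from Part 1 boils down to a distance calculation against the stored coordinates. The sketch below is illustrative only (the Store class and method names are assumptions, not code from the series), showing one way to express that filter in C# using the haversine formula:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Store
        {
            public string Name { get; set; }
            public double Latitude { get; set; }
            public double Longitude { get; set; }
        }

        public static class StoreLocator
        {
            // Great-circle distance between two points in miles (haversine formula)
            public static double DistanceInMiles(double lat1, double lng1,
                                                 double lat2, double lng2)
            {
                const double earthRadiusMiles = 3959.0;
                double dLat = ToRadians(lat2 - lat1);
                double dLng = ToRadians(lng2 - lng1);
                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                           Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                           Math.Sin(dLng / 2) * Math.Sin(dLng / 2);
                return earthRadiusMiles * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
            }

            // Filter a store list down to those within the search radius
            public static IEnumerable<Store> FindNearby(IEnumerable<Store> stores,
                double searchLat, double searchLng, double radiusMiles)
            {
                return stores.Where(s => DistanceInMiles(searchLat, searchLng,
                    s.Latitude, s.Longitude) <= radiusMiles);
            }

            private static double ToRadians(double degrees)
            {
                return degrees * Math.PI / 180.0;
            }
        }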

  • Don’t learn SSDT, learn about your databases instead

    - by jamiet
    Last Thursday I presented my session “Introduction to SSDT” at the SQL Supper event held at the offices of 7 Digital (loved the samosas, guys). I did my usual spiel: tour of the IDE, connected development, declarative database development, yadda yadda yadda… and at the end asked if there were any questions. One gentleman in attendance (sorry, can’t remember your name) raised his hand and stated that by attempting to evangelise all of the features I’d missed the single biggest benefit of SSDT: that it can tell you things about your database that you didn’t already know. I realised that he was dead right. SSDT allows you to import your whole database schema into a new project, and it will instantly give you a list of errors and/or warnings pertaining to the objects in your database. Invalid references (e.g. a long-forgotten stored procedure that refers to a non-existent column), unnecessary 3-part naming, incorrect case usage, syntax errors… it’ll tell you about all of ‘em! Turn on static code analysis (this article shows you how) and you’ll learn even more, such as any stored procedures that begin with “sp_”, WHERE clauses that will kill performance, use of @@IDENTITY instead of SCOPE_IDENTITY(), use of deprecated syntax, implicit casts, etc. – the list goes on and on. I urge you to download and install SSDT (it takes a few minutes, it’s free, and you don’t need SQL Server or Visual Studio pre-installed), start a new project, right-click on your new project and import from your database, and see what happens. You may be surprised what you discover. Let me know in the comments below what results you get – total number of objects, number of errors/warnings – I’d be interested to know! @Jamiet

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blog, and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support etc. – there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and they implement the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data.
    - Concurrency is also affected when multiple processes attempt to change the same data simultaneously.

    There are two approaches to managing concurrent data access: the Optimistic Concurrency Model and the Pessimistic Concurrency Model.

    Concurrency Models

    Pessimistic Concurrency

    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (i.e. assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other process can modify that data. It also acquires locks on data being modified, so no other process can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic Concurrency

    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurred.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to it.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers, and writers do not block readers; but writers can and will block writers.

    Transaction Processing

    - A transaction is the basic unit of work in SQL Server.
    - A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN, with the end marked by a COMMIT TRAN or ROLLBACK TRAN).
    - Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases: it ensures all transactions are performed as a single unit of work regardless of hardware or system failure. A – Atomicity, C – Consistency, I – Isolation, D – Durability.

    - Atomicity: each transaction is treated as all or nothing – it either commits or aborts.
    - Consistency: ensures that a transaction won’t allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems).

    - Lost Updates: occur when two processes read the same data, both manipulate it, changing its value, and both then try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty Reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable Reads: a read is non-repeatable if a process might get different values when reading the same data twice within the same transaction. This can happen when another process changes the data in between the reads.
    - Phantoms: occur when membership in a set changes, i.e. when two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports 5 isolation levels that control the behavior of read operations.

    Read Uncommitted

    - All the behaviors above except lost updates are possible.
    - Implemented by allowing read operations to take no locks; because of this, a read won’t be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan has reached.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn’t been read yet is updated and moved to a lower page number that has already been read.

    Read Committed

    - There are two varieties of read committed isolation: optimistic and pessimistic (the default).
    - Pessimistic read committed ensures that a read never reads data that another application hasn’t committed. If another transaction is updating data and holds exclusive locks on it, your transaction will have to wait for those locks to be released. Your transaction must put share locks on data that it visits, which means that data might be unavailable for others to use. A share lock doesn’t prevent others from reading, but it prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read

    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn’t change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user’s transaction, because no changes can be made by other transactions. It does, however, allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot

    - Snapshot Isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be.
    - It is possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable

    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows have not been added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock not only data that has been read, but also data that doesn’t exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock.
    - Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental building blocks that make concurrency possible. Now if you are with me – let us continue learning with SQL Server Locking Basics.
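    As a small addendum, here is a minimal ADO.NET sketch (an illustration added here, not from Vinod's post) of how an application opts into one of these isolation levels. The connection string and the dbo.Orders table are placeholders, and snapshot isolation must first be enabled on the database with ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON before it can be requested:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class IsolationLevelDemo
        {
            static void Main()
            {
                // placeholder connection string - adjust for your environment
                var connectionString = "Server=.;Database=TestDb;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // request optimistic (snapshot) isolation: readers see the last
                    // committed version of each row instead of blocking on writers
                    using (SqlTransaction tx =
                        connection.BeginTransaction(IsolationLevel.Snapshot))
                    using (var cmd = new SqlCommand(
                        "SELECT COUNT(*) FROM dbo.Orders", connection, tx))
                    {
                        Console.WriteLine("Rows visible: {0}", cmd.ExecuteScalar());
                        tx.Commit();
                    }
                }
            }
        }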
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

  • Does LearnDevNow offer a useful subscription for .NET-related training, with video and text reference material?

    - by user766926
    Does LearnDevNow offer a useful subscription for .NET video and text reference training material, or to learn .NET? I could not find a review of www.learnnowonline.com/learndevnow. I've been doing research on places that offer .NET / ASP.NET / C# / LINQ / SQL / jQuery video training with excellent material for developers – kind of like a Lynda.com for back-end development. However, I have not found one place that offers quality material that is easily accessible for the user and competitively priced. This is what I'm looking for:

    - Online video tutorials with free future updates (videos that can be accessed on any device, i.e. Android portable devices, Mac / iPad, Linux machines)
    - Printable courseware (i.e. PDFs, so that you can take notes, print if necessary, and read on a tablet in case you don't have internet access), labs, and easy-to-access code
    - Pre/post assessments/exams for each training course to keep track of your progress and what you have learned. (AppDev used to offer this, but it was way too expensive – thousands for each training course, i.e. 1k for each; I paid about five thousand at AppDev, and now I regret that purchase – and after a few years/months the training became obsolete/outdated.)

    I looked at learnnowonline.com/learndevnow, but it seems that their courseware / text reference material can only be accessed online, and I don't know if their videos work on all mobile devices and browsers (Safari, Chrome, IE, Opera, Firefox). Also, it seems AppDev no longer exists and is now called LearnNowPlus: www.learnnowonline.com/appdev. That site no longer lists prices. I tried to find reviews but could not find any. Does anyone know, or can anyone share a review about, some of these types of online training service providers? I would appreciate your feedback, and if you can share your past experiences with their service or similar/better services, that would be even better. Thanks.

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures:

    1. A persistent data structure that is described using DDL and implemented as RDBMS tables and columns.
    2. A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, that are implemented in a programming language such as Java.
    3. A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language.
    4. UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures.

    But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data tracking spreadsheet document that listed all of the data structures and walked the relationships from tier-to-tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent:

    - database table and column data structures
    - domain object data structures
    - service layer interface methods and parameter data structures
    - screen & UI component data structures
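    For what it's worth, one homespun alternative to the spreadsheet is to make the tier-to-tier mapping machine-readable and generate the dependency list from it. The sketch below is only an illustration of that idea (the attribute, class and column names are all invented): annotate domain properties with their backing table/column and walk them by reflection.

        using System;
        using System.Linq;
        using System.Reflection;

        // Hypothetical attribute linking a domain property to its database column
        [AttributeUsage(AttributeTargets.Property)]
        class BackedByAttribute : Attribute
        {
            public string Table { get; private set; }
            public string Column { get; private set; }
            public BackedByAttribute(string table, string column)
            {
                Table = table;
                Column = column;
            }
        }

        class Customer
        {
            [BackedBy("CUSTOMER", "CUST_NAME")]
            public string Name { get; set; }

            [BackedBy("CUSTOMER", "CUST_EMAIL")]
            public string Email { get; set; }
        }

        class DependencyReport
        {
            static void Main()
            {
                // Walk every annotated property and print the column it depends on;
                // the same pass could emit service/UI usages if those layers
                // carried similar annotations.
                var rows =
                    from p in typeof(Customer).GetProperties()
                    from a in p.GetCustomAttributes(typeof(BackedByAttribute), false)
                               .Cast<BackedByAttribute>()
                    select new { p.Name, a.Table, a.Column };

                foreach (var r in rows)
                    Console.WriteLine("{0}.{1} <- {2}.{3}",
                        typeof(Customer).Name, r.Name, r.Table, r.Column);
            }
        }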

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured that I would be able to instantiate the various DomainService classes from within my controller’s action methods, because after all, the code for those services didn’t look very complicated.  WRONG!  I didn’t realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad’s solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview).

    I have not tried the RC version of WCF RIA Services, so I am not sure if any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn’t have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches!

    1. Querying

    When I first tried to use a DomainService class to perform a query inside one of my controller’s action methods, I got an error stating that “This DomainService has not been initialized.”  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service’s Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!

        public static class ServiceExtensions
        {
            /// <summary>
            /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>
            /// and calling the base DomainService.Initialize(DomainServiceContext) method.
            /// </summary>
            /// <typeparam name="TService">The type of the service.</typeparam>
            /// <param name="service">The service.</param>
            /// <returns></returns>
            public static TService Initialize<TService>(this TService service)
                where TService : DomainService
            {
                var context = CreateDomainServiceContext();
                service.Initialize(context);
                return service;
            }

            private static DomainServiceContext CreateDomainServiceContext()
            {
                var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));
                return new DomainServiceContext(provider, DomainOperationType.Query);
            }

            private static HttpContext GetHttpContext()
            {
                var context = HttpContext.Current;

        #if DEBUG
                // create a mock HttpContext to use during unit testing...
                if (context == null)
                {
                    var writer = new StringWriter();
                    var request = new SimpleWorkerRequest("/", "/",
                        String.Empty, String.Empty, writer);

                    context = new HttpContext(request)
                    {
                        User = new GenericPrincipal(new GenericIdentity("debug"), null)
                    };
                }
        #endif

                return context;
            }
        }

    With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():

        public ActionResult Index()
        {
            var service = new NorthwindService().Initialize();
            var customers = service.GetCustomers();

            return View(customers);
        }

    2. Insert / Update / Delete

    Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn’t getting any kind of error, which made it a little difficult to track down.  But once I realized that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.

        [EnableClientAccess()]
        public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>
        {
            public IQueryable<Customer> GetCustomers()
            {
                return this.DataContext.Customers;
            }

            public void InsertCustomer(Customer customer)
            {
                this.DataContext.Customers.InsertOnSubmit(customer);
            }

            public void UpdateCustomer(Customer currentCustomer)
            {
                this.DataContext.Customers.TryAttach(currentCustomer,
                    this.ChangeSet.GetOriginal(currentCustomer));
            }

            public void DeleteCustomer(Customer customer)
            {
                this.DataContext.Customers.TryAttach(customer);
                this.DataContext.Customers.DeleteOnSubmit(customer);
            }
        }

    Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that check to see if the supplied entity is already attached before attempting to attach it, which would cause an error!

    3. LinqToSqlRepository<T>

    Below is the code for the LinqToSqlRepository class.  The comments are pretty self explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensure that they will be ignored by the code generator and not available in the Silverlight client application.

        /// <summary>
        /// Provides generic repository methods on top of the standard
        /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.
        /// </summary>
        /// <typeparam name="TContext">The type of the context.</typeparam>
        public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>
            where TContext : System.Data.Linq.DataContext, new()
        {
            /// <summary>
            /// Retrieves an instance of an entity using its unique identifier.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="keyValues">The key values.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class
            {
                var table = this.DataContext.GetTable<TEntity>();
                var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));

                var keys = mapping.RowType.IdentityMembers
                    .Select((m, i) => m.Name + " = @" + i)
                    .ToArray();

                return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();
            }

            /// <summary>
            /// Creates a new query that can be executed to retrieve a collection
            /// of entities from the <see cref="DataContext"/>.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class
            {
                return this.DataContext.GetTable<TEntity>();
            }

            /// <summary>
            /// Inserts the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class
            {
                //var table = this.DataContext.GetTable<TEntity>();
                //table.InsertOnSubmit(entity);

                return this.Submit(entity, null, DomainOperation.Insert);
            }

            /// <summary>
            /// Updates the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Update<TEntity>(TEntity entity) where TEntity : class
            {
                return this.Update(entity, null);
            }

            /// <summary>
            /// Updates the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <param name="original">The original.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Update<TEntity>(TEntity entity, TEntity original)
                where TEntity : class
            {
                if (original == null)
                {
                    original = GetOriginal(entity);
                }

                var table = this.DataContext.GetTable<TEntity>();
                table.TryAttach(entity, original);

                return this.Submit(entity, original, DomainOperation.Update);
            }

            /// <summary>
            /// Deletes the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class
            {
                //var table = this.DataContext.GetTable<TEntity>();
                //table.TryAttach(entity);
                //table.DeleteOnSubmit(entity);

                return this.Submit(entity, null, DomainOperation.Delete);
            }

            protected virtual bool Submit(Object entity, Object original, DomainOperation operation)
            {
                var entry = new ChangeSetEntry(0, entity, original, operation);
                var changes = new ChangeSet(new ChangeSetEntry[] { entry });
                return base.Submit(changes);
            }

            private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class
            {
                var context = CreateDataContext();
                var table = context.GetTable<TEntity>();
                return table.FirstOrDefault(e => e == entity);
            }
        }

    4. Conclusion

    So there you have it, a fully functional repository implementation for your RIA Domain Services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a controller, as well as a Silverlight usage example.

    As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM is in relation to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM protects the cached rights on the local machine; if that cache has become invalid in any way, this error is thrown.

    Offline rights and security

    First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution; it demonstrates our methodology of ensuring that solutions address both security and usability, and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, which is the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed, however, and can be as low as under an hour or as long as years. It is also possible to switch off the ability to access content offline, which can be useful when content is very sensitive and requires a tight leash.

    So when a user is online, the Oracle IRM Desktop transparently communicates with the server in the background and updates the user's rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops, and allows users' rights to be kept up to date without their intervention. This allows us to support some very important scenarios which are key to a successful IRM solution.

    - A user doesn't have to make any decision when going offline; they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work, because people forget to do this and will therefore be unable to legitimately access their content offline.
    - If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and happens to make a connection to the internet 3 days into that offline period: the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk is reduced when setting long offline periods; because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week offline period, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place.
    - If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important. Consider the scenario where a senior executive downloads all their email but doesn't open any of it, disconnects the laptop and then gets on a plane. During the flight they attempt to open a document attached to a downloaded email which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized, their experience is a good one and the document opens.

    More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams.

    So what about problems accessing the offline rights database?

    So onto the core issue... when these rights are cached to your machine, they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the IRM Desktop installation and the Windows user account. Why? Well, what you do not want is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines. This is because people typically access IRM content on more than one computer: their work desktop, a laptop and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache.

    So what happens if these files are corrupted in some way? That's when you will see the error "Unable to Connect to Offline database". The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, makes changes to the unique information of that virtual machine and as such invalidates the offline database.

    How do you solve the problem?

    Resolution is, however, simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. However, this does mean that the IRM server will think you have your rights cached to more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one, because the server still thinks the old cache is valid. So be aware: it is good practice to increase the server limit from the default of 1 to, say, 3 or 4. This is done using the Enterprise Manager instance of IRM.

    To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat. Note that this uses pskill to stop irmBackground.exe from running; this is part of the IRM Desktop and holds open a lock on the offline database. Either kill this from Task Manager or use pskill as part of the script.
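    The contents of the .bat file itself are behind the download link, but based on the steps described above (stop irmBackground.exe, then delete the cached files), a rough C# equivalent might look like the following. Note that the cache folder path below is a placeholder, not the actual location, which depends on your IRM Desktop installation:

        using System;
        using System.Diagnostics;
        using System.IO;

        class ClearIrmOfflineCache
        {
            static void Main()
            {
                // Stop the IRM background process that holds a lock on the offline database
                foreach (var p in Process.GetProcessesByName("irmBackground"))
                {
                    p.Kill();
                    p.WaitForExit();
                }

                // Placeholder path - the real offline database location is not
                // specified in this article and varies by IRM Desktop version
                string cacheFolder = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                    @"Oracle\IRM\OfflineCache");

                if (Directory.Exists(cacheFolder))
                    Directory.Delete(cacheFolder, true);

                // The files are recreated with fresh encryption the next time
                // the Oracle IRM Desktop starts
            }
        }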

  • Oracle UPK Content Development Tool Settings

    - by [email protected]
    Oracle UPK Content Development tool settings: Before developing UPK content, your UPK Developer needs to be configured with certain standard settings to ensure the content will have a uniform look. To set the options:

    1. Open the UPK Developer.
    2. Click the Tools menu.
    3. Click Options.

    After you configure the UPK Options, you can share these preferences with other content developers by exporting them to an .ops file. This is particularly useful in workgroup environments where multiple authors are working on the same content that requires consistent output regardless of who authored the content. (To learn more about exporting/importing content defaults, refer to the Content Development.pdf guide that is delivered with the UPK Developer.)

    Here is a list of a few UPK Developer tool settings that Oracle UPK content developers use to develop UPK pre-built content:

    - Screen resolution is set to 1024 x 768.
    - See It mode frame delay is set to 5 seconds.
    - Know It Required % is set to 70% and all three levels of remediation are selected.
    - We opt to automatically record keyboard shortcuts.
    - We use the default settings for the Bubble icon and Pointer position.
    - Bubble color is yellow (Red = 255, Green = 255, Blue = 128).
    - Bubble text is Verdana, Regular, 9 pt.
    - Intro and end frame settings match the bubble settings.

    Note: The Content Defaults String Input Settings will change based on which application (interface) you are recording against. For example, here is a list of settings for different Oracle applications:

    - Agile – Microsoft Sans Serif, Regular, 8
    - EBS – Microsoft Sans Serif, Regular, 10
    - Hyperion – Microsoft Sans Serif, Regular, 8
    - JDE E1 – Arial, Regular, 10
    - PeopleSoft – Arial, Regular, 9
    - Siebel – Arial, Regular, 8

    Remember, it is recommended that you set the content defaults before you add documents and record content. When the content defaults are changed, existing documents are not affected and continue to use the defaults that were in effect when those documents were created.

    - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

  • Resources for the “What’s New in VS 2013” Presentation

    - by John Alexander
    Originally posted on: http://geekswithblogs.net/jalexander/archive/2013/10/24/resources-for-the-ldquowhatrsquos-new-in-vs-2013rdquo-presentation.aspx

    Thanks for attending the “What’s New in Visual Studio 2013 (and TFS too)” presentation. As promised, here are some links! Note: if you didn’t attend, it’s OK. This is for you, too.

    - The bits themselves.
    - This article introduces new and enhanced features in Visual Studio 2013.
    - Visual Studio Virtual Launch – lots of videos here, and then on November 13th, live sessions and a Q&A session.
    - What features map to what Visual Studio editions.
    - Visual Studio 2013 New Editor Features.
    - Visual Studio 2013 Application Lifecycle Management Virtual Machine and Hands-on-Labs / Demo Scripts from Brian Keller.
    - More on CodeLens from Zain Naboulsi.
    - What are Web Essentials? You can now download Web Essentials for Visual Studio 2013 RTM.
    - A great overview of TFS 2013 from Brian Harry.
    - The release archive lists updates made to Team Foundation Service along with which version of Team Foundation Server the updates are a part of.
    - REST API for Team Rooms.
    - “What's new in Visual Studio for Web Developers and Front End Devs” screencasts – quick, easy and painless, from the always awesome Scott Hanselman.
    - Introducing ASP.NET Identity – a membership system for ASP.NET applications.
    - Visual Studio 2013 Adds New Project Templates with Improvements and Social Accounts Authentication.

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of the metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict.

    New data sets are added on a weekly basis (~1 GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slow, so long as queries run quick and are as up-to-date as possible.

    The application continues to grow in size and scope and is already starting to run slower than I would like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X years down the road.

    Here is the current state of the application:

    - Web server / data processing: Pipeline Pilot on Windows Server + IIS
    - Database: Oracle 10g, ~1 TB of data, ~180 tables, with several billion-plus-row tables
    - Network storage: Isilon, ~50 TB of low-priority raw data

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
    A DBA wears multiple hats and in fact does more than what the eye can see. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world, because even though they seem cool with all the scripting, automation and maintenance work running round the clock to keep the business working almost 365 days, 24×7, their worth is truly known the day the system or HDD crashes while you have an important delivery to make. That is when those backup tasks / maintenance jobs come in handy and are no longer as trivial as they might seem. Questions like "When was the last backup taken?", "How much time did the last backup take?" and "What type of backup was taken last?" are tricky ones, and this report delivers answers to them in a jiffy.

    The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report utilizes that information and provides details about the size, time taken and file location for those operations. Here is how this report can be launched.

    Once we launch this report, we can see 4 major sections, listed below:

    - Average Time Taken For Backup Operations
    - Successful Backup Operations
    - Backup Operation Errors
    - Successful Restore Operations

    Let us look at each section next.

    Average Time Taken For Backup Operations

    Information shown in the "Average Time Taken For Backup Operations" section is taken from the backupset table in the msdb database. Here is the query behind that particular section:

        USE msdb;
        SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE)) % 2 AS l1,
               1 AS l2,
               1 AS l3,
               t1.TYPE AS [type],
               (AVG(DATEDIFF(ss, backup_start_date, backup_finish_date))) / 60.0 AS AverageBackupDuration
        FROM backupset t1
        INNER JOIN sys.databases t3 ON (t1.database_name = t3.name)
        WHERE t3.name = N'AdventureWorks2014'
        GROUP BY t1.TYPE
        ORDER BY t1.TYPE

    On my small database the time taken for a differential backup was less than a minute, hence a value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows.

    Successful Backup Operations

    This information is derived from various backup tracking tables in the msdb database. Here is a simplified version of the query, which can be used separately as well:

        SELECT *
        FROM sys.databases t1
        INNER JOIN backupset t3 ON (t3.database_name = t1.name)
        LEFT OUTER JOIN backupmediaset t5 ON (t3.media_set_id = t5.media_set_id)
        LEFT OUTER JOIN backupmediafamily t6 ON (t6.media_set_id = t5.media_set_id)
        WHERE (t1.name = N'AdventureWorks2014')
        ORDER BY backup_start_date DESC, t3.backup_set_id, t6.physical_device_name;

    The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I expanded the first row by clicking (+) on the "Device type" column, which showed me the path of the physical backup file. Personally, looking at this section, the Backup Size, Device Type and Backup Name are critical and worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it.

    Backup Operation Errors

    This section of the report gets its data from the default trace. You might wonder how. One of the events tracked by the default trace is "ErrorLog": whatever message is written to the error log also gets written to the default trace file. Interestingly, whenever there is a backup failure, an error message is written to the ERRORLOG and hence to the default trace. This section takes advantage of that and shows the information. We can read the message below under this section, which confirms the above logic:

    No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled.

    Successful Restore Operations

    This section may not be very useful on a production server (do you perform database restores there?) but might be useful in development and log shipping secondary environments, where we might be interested to see the restore operations for a particular database. To fill this section of the report, I restored the same backups that were taken to populate the earlier sections. Here is a simplified version of the query used to populate this output:

        USE msdb;
        SELECT *
        FROM restorehistory t1
        LEFT OUTER JOIN restorefile t2 ON (t1.restore_history_id = t2.restore_history_id)
        LEFT OUTER JOIN backupset t3 ON (t1.backup_set_id = t3.backup_set_id)
        WHERE t1.destination_database_name = N'AdventureWorks2014'
        ORDER BY restore_date DESC, t1.restore_history_id, t2.destination_phys_name

    Have you ever looked at the backup strategy of your key databases? Are they in sync, and do we have scope for improvements? Then this is the report to analyze after a week or month of maintenance plans running in your database. Do chime in with the strategies you are using in your environments.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
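    If you ever need the "When was the last backup taken?" answer outside of SSMS, the same msdb history can be read programmatically. Here is a minimal ADO.NET sketch (an illustration, not part of the report; adjust the connection string and database name for your environment):

        using System;
        using System.Data.SqlClient;

        class LastBackupCheck
        {
            static void Main()
            {
                // placeholder connection string - point it at your server
                var connectionString = "Server=.;Database=msdb;Integrated Security=true";

                const string sql = @"
                    SELECT TOP 1 type, backup_start_date, backup_finish_date
                    FROM msdb.dbo.backupset
                    WHERE database_name = @db
                    ORDER BY backup_finish_date DESC";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    command.Parameters.AddWithValue("@db", "AdventureWorks2014");
                    connection.Open();

                    using (var reader = command.ExecuteReader())
                    {
                        if (reader.Read())
                            Console.WriteLine("Last backup: type={0}, started {1}, finished {2}",
                                reader.GetString(0), reader.GetDateTime(1), reader.GetDateTime(2));
                        else
                            Console.WriteLine("No backup history found.");
                    }
                }
            }
        }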

  • Programming logic to group a user's activities like Facebook, e.g. Chris is now friends with A, B and C

    - by Chris Dowdeswell
    So I am trying to develop an activity feed for my site. Basically, if I UNION a bunch of activities into a feed, I end up with something like the following:

    Chris is now friends with Mark
    Chris is now friends with Dave

    What I want, though, is a neater way of grouping these similar posts so the feed doesn't give information overload, e.g.:

    Chris is now friends with Mark, Dave and 4 Others

    Any ideas on how I can approach this logically? I am using Classic ASP on SQL Server. Here is the UNION statement I have so far:

        SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
               U.Fname + ' ' + U.Sname As FullName,
               'said ' + WP.Post AS Activity, WP.Ctime
        FROM Users AS U
        LEFT JOIN Logins L ON L.userID = U.UserID
        LEFT OUTER JOIN WallPosts AS WP ON WP.userID = U.userID
        WHERE WP.Ctime IS NOT NULL
        UNION
        SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
               U.Fname + ' ' + U.Sname As FullName,
               'commented ' + C.Comment AS Activity, C.Ctime
        FROM Users AS U
        LEFT JOIN Logins L ON L.userID = U.UserID
        LEFT OUTER JOIN Comments AS C ON C.UserID = U.userID
        WHERE C.Ctime IS NOT NULL
        UNION
        SELECT U.UserID As UserID, L.UN As UN, Left(U.UID,13) As ProfilePic,
               U.Fname + ' ' + U.Sname As FullName,
               'connected with <a href="/profile.asp?un=' +
                   (SELECT Logins.un FROM Logins WHERE Logins.userID = Cn.ToUserID) + '">' +
                   (SELECT Users.Fname + ' ' + Users.Sname FROM Users WHERE userID = Cn.ToUserID) +
                   '</a>' AS Activity,
               Cn.Ctime
        FROM Users AS U
        LEFT JOIN Logins L ON L.userID = U.UserID
        LEFT OUTER JOIN Connections AS Cn ON Cn.UserID = U.userID
        WHERE CN.Ctime IS NOT NULL
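    One way to get the "Mark, Dave and 4 Others" effect is to group the unioned rows by user and activity type before rendering, then collapse each group into a single line. The site is Classic ASP, so the C# LINQ sketch below (all names invented) only illustrates the grouping logic rather than being drop-in code; a real version would pick the verb ("is now friends with", "commented on", ...) per activity type:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class FeedItem
        {
            public string Actor;        // e.g. "Chris"
            public string ActivityType; // e.g. "friend", "comment"
            public string Target;       // e.g. "Mark"
            public DateTime Ctime;
        }

        class FeedGrouper
        {
            static void Main()
            {
                var feed = new List<FeedItem>
                {
                    new FeedItem { Actor = "Chris", ActivityType = "friend", Target = "Mark", Ctime = DateTime.Now },
                    new FeedItem { Actor = "Chris", ActivityType = "friend", Target = "Dave", Ctime = DateTime.Now },
                    new FeedItem { Actor = "Chris", ActivityType = "friend", Target = "Ann",  Ctime = DateTime.Now },
                };

                // Group rows that share an actor and activity type, then collapse
                // each group into a single headline
                var grouped = feed
                    .GroupBy(f => new { f.Actor, f.ActivityType })
                    .Select(g =>
                    {
                        var targets = g.Select(f => f.Target).ToList();
                        string tail = targets.Count > 2
                            ? String.Join(", ", targets.Take(2)) +
                              " and " + (targets.Count - 2) + " others"
                            : String.Join(" and ", targets);
                        return g.Key.Actor + " is now friends with " + tail;
                    });

                foreach (var line in grouped)
                    Console.WriteLine(line);
            }
        }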

  • Add Hotmail & Live Email Accounts to Outlook 2010

    - by Matthew Guay
    Microsoft has recently been promoting upcoming updates to their Hotmail service, promising to make it an even better webmail service. But Microsoft’s revamped Outlook 2010 is already here. Here’s how to integrate Hotmail with Outlook. Outlook 2010 works with a wide variety of email accounts, including POP3, IMAP, and Exchange accounts.  The only problem with POP3 and IMAP accounts is that they only sync email, but not your calendar and contacts like Exchange does.  Hotmail, however, lets you sync your email, contacts, and calendar with Outlook with the Hotmail Connector.  This lets you keep all of your PIM data accessible from everywhere.  Let’s look at how we can set this up on our account. Getting Started The easiest way to add Hotmail to Outlook is to first install the Outlook Hotmail Connector (link below).  Make sure Outlook is closed first, and then proceed with the installation as usual. If you enter your Hotmail account into the New Account setup in Outlook before installing the Hotmail Connector, Outlook will prompt you to download the Hotmail Connector.  However, you’ll have to exit Outlook before you can install the Connector, and then will have to re-enter your information when you restart Outlook, so it’s easier to just install it first. Add Your Hotmail Account to Outlook Now you’re ready to add your Hotmail account to Outlook.  If this is the first time you’ve run Outlook 2010, you’ll be greeted with the following screen.  Click Next to proceed with setup. Then select Yes and click Next again. If you’ve already got an email account setup in Outlook, you can add a new account by clicking File and then selecting Add account. Now, enter your Hotmail account information, and click Next. Outlook will search for your account settings and automatically setup your account with the Hotmail connector we previously installed. If you entered your password incorrectly previously, you may see the following popup.  Re-enter your password and click OK, and Outlook will re-verify your settings. Once everything’s finished and setup, you’ll see the following completion screen.  Click Finish to complete the setup and check out your Hotmail in Outlook. Welcome to your Hotmail account in Outlook 2010.  You’ll notice a small notification at the bottom of the window notifying you that you’re connected to Windows Live Hotmail.  Now your email will synchronize with your Hotmail account, and your Outlook calendar and contacts will be synced with your Live calendar and contacts, respectively.  This is the closest you can get to full Exchange without an Exchange account, and in our experience it works great.  In fact, Hotmail Sync seems to work faster than IMAP sync for us. Setup Hotmail With POP3 Access If you need to access your Hotmail email account but don’t want to install the Outlook Connector, then you can add it with POP3 sync.  We recommend going with the Outlook Connector for the best experience, but if you can’t install it (eg. you’re not allowed to install applications on your work PC) then this is a good alternative. To do this, follow our tutorial on setting up a Gmail POP3 account in Outlook. Although the article concentrates on Gmail, the settings are essentially the same. The only thing you’ll want to change is the Incoming and Outgoing mail server. 
    Incoming mail server – pop3.live.com
    Outgoing mail server – smtp.live.com
    User name – your Hotmail or Live email address
    Incoming Server (POP3) – 995
    Outgoing Server (SMTP) – 587
    Also, check This server requires an encrypted connection. Just as in the Gmail example, select TLS for the type of encrypted connection.  Then, at the bottom, make sure to uncheck the box to Remove messages from the server after a number of days.  This way your messages will still be accessible from your Hotmail account online.
    Conclusion
    Even though Hotmail is generally not as popular as Gmail, it works great with Outlook integration.  If you’re a heavy user of Windows Live services, or want to try them out, Outlook Connector is the easiest way to keep your desktop activity synced with the cloud.  If you’re just one of the millions of Hotmail users who want to access their old Hotmail account alongside their other accounts, this method works great for you too. If you’re using Outlook 2003 or 2007, check out our article on using Hotmail from Microsoft Outlook.
    Links
    Download Outlook Hotmail Connector 32-bit
    Download Outlook Hotmail Connector 64-bit – note, only for users of Office 2010 x64

    Read the article

  • 550 “Overwrite permission denied” when editing a file via FTP

    - by nodebunny
    DreamHost recently moved my accounts to a new shared box, and now I can't edit files via UltraEdit's built-in FTP client, which messes up my work flow! What did they do that this is not working now? It stopped working after they moved me. Here's the output from the FTP console in UltraEdit:
    10/26/2011 10:42:36 AM: 220 DreamHost FTP Server
    10/26/2011 10:42:36 AM: USER nodebunny
    10/26/2011 10:42:36 AM: 331 Password required for ninjawww
    10/26/2011 10:42:36 AM: PASS xxxxxxxx
    10/26/2011 10:42:36 AM: 230 User nodebunny logged in
    10/26/2011 10:42:36 AM: FEAT
    10/26/2011 10:42:36 AM: 211-Features: LANG ja-JP.UTF-8;ja-JP;zh-TW;fr-FR;zh-CN;en-US*;bg-BG;ko-KR.UTF-8;ko-KR MDTM MFMT TVFS UTF8 MFF modify;UNIX.group;UNIX.mode; MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; REST STREAM SIZE 211 End
    10/26/2011 10:42:36 AM: OPTS UTF8 ON
    10/26/2011 10:42:36 AM: 200 UTF8 set to on
    10/26/2011 10:42:36 AM: PWD
    10/26/2011 10:42:36 AM: 257 "/" is the current directory
    10/26/2011 10:42:36 AM: PWD
    10/26/2011 10:42:36 AM: 257 "/" is the current directory
    10/26/2011 10:42:36 AM: CWD /dev/proj/nodebunny
    10/26/2011 10:42:36 AM: 250 CWD command successful
    10/26/2011 10:42:36 AM: PWD
    10/26/2011 10:42:36 AM: 257 "/dev/proj/nodebunny/lib/Buffer" is the current directory
    10/26/2011 10:42:36 AM: PWD
    10/26/2011 10:42:37 AM: 257 "/dev/proj/nodebunny/lib/Buffer" is the current directory
    10/26/2011 10:42:37 AM: TYPE I
    10/26/2011 10:42:37 AM: 200 Type set to I
    10/26/2011 10:42:37 AM: PORT 10,15,55,125,226,16
    10/26/2011 10:42:37 AM: 200 PORT command successful
    10/26/2011 10:42:37 AM: STOR Buffer.pm
    10/26/2011 10:42:37 AM: 550 Buffer.pm: Overwrite permission denied

    Read the article

  • IE9 RC fixed the “Internet Explorer cannot display the webpage” error when running an ASP.NET application in Visual Studio

    - by Jon Galloway
    One of the obstacles ASP.NET developers faced in using the Internet Explorer 9 Beta was the dreaded “Internet Explorer cannot display the webpage” error when running an ASP.NET application in Visual Studio. In the bug information on Connect (issue 601047), Eric Lawrence said that the problem was “caused by failure to failover from IPv6 to IPv4 when the connection is local.” Robert MacLean gives some more information as to what was going wrong: “The problem is Windows, especially since it assumes IPv6 is better than IPv4. Note […] that when you ping localhost you get an IPv6 address. So what appears to be happening is when IE9 tries to go to localhost it uses IPv6, and the ASP.NET Development Server is IPv4 only and so nothing loads and we get the error.”
    The Simple Fix - Install IE 9 RC
    Internet Explorer 9 RC fixes this bug, so if you had tried IE 9 Beta and stopped using it due to problems with ASP.NET development, install the RC.
    The Workaround in IE 9 Beta
    If you're stuck on IE 9 Beta for some reason, you can follow Robert's workaround, which involves a one-character edit to your hosts file. I've been using it for months, and it works great.
    Open Notepad (running as administrator) and edit the hosts file (found in %systemroot%\system32\drivers\etc)
    Remove the # comment character before the line starting with 127.0.0.1
    Save the file - if you have problems saving, it's probably because you weren't running as administrator
    When you're done, your hosts file will end with the following lines (assuming you were using a default hosts file setup beforehand):
    # localhost name resolution is handled within DNS itself.
    127.0.0.1       localhost
    #    ::1             localhost
    Note: more information on editing your hosts file here. This causes Windows to default to IPv4 when resolving localhost, which will point to 127.0.0.1, which is right where Cassini - I mean the ASP.NET Web Development Server - is waiting for it.
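    As a quick way to see the resolution behaviour Robert describes (a standalone sketch, not part of the original article), a small console program can print what localhost resolves to on your machine:
    // Prints every address "localhost" resolves to; on an affected machine
    // the IPv6 loopback (::1) comes back first, which IE9 Beta tried while
    // the ASP.NET Development Server listened only on IPv4.
    using System;
    using System.Net;

    class LocalhostCheck
    {
        static void Main()
        {
            foreach (IPAddress address in Dns.GetHostAddresses("localhost"))
            {
                Console.WriteLine("{0} ({1})", address, address.AddressFamily);
            }
        }
    }
    After the hosts file edit described above, the IPv4 loopback 127.0.0.1 should be returned instead.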

    Read the article

  • Change Comes from Within

    - by John K. Hines
    I am in the midst of witnessing a variety of teams moving away from Scrum. Some of them are doing things like replacing Scrum terms with more commonly understood terminology. Mainly they have gone back to using industry standard terms and more traditional processes like the RAPID decision making process. For example: Scrum Master becomes Project Lead. Scrum Team becomes Project Team. Product Owner becomes Stakeholders. I'm actually quite sad to see this happening, but I understand that Scrum is a radical change for most organizations. Teams are slowly but surely moving away from Scrum to a process that non-software engineers can understand and follow. Some could never secure the education or personnel (like a Product Owner) to get the whole team engaged. And many people with decision-making authority do not see the value in Scrum besides task planning and tracking. You see, Scrum cannot be mandated. No one can force a team to be Agile, collaborate, continuously improve, and self-reflect. Agile adoptions must start from a position of mutual trust and willingness to change. And most software teams aren't like that. Here is my personal epiphany from over a year of attempting to promote Agile on a small development team: The desire to embrace Agile methodologies must come from each and every member of the team. If this desire does not exist - if the team is satisfied with its current process, if the team is not motivated to improve, or if the team is afraid of change - the actual demonstration of all the benefits prescribed by Agile and Scrum will take years. I've read some blog posts lately that criticise Scrum for demanding "Big Change Up Front." One's opinion of software methodologies boils down to one's perspective. If you see modern software development as successful, you will advocate for small, incremental changes to how it is done. If you see it as broken, you'll be much more motivated to take risks and try something different. So my question to you is this - is modern software development healthy or in need of dramatic improvement? I can tell you from personal experience that any project that requires exploration, planning, development, stabilisation, and deployment is hard. Trying to make that process better with only a slightly modified approach is a mistake. You will become completely dependent upon the skillset of your team (the only variable you can change). But the difficulty of planned work isn't one of skill. It isn't until you solve the fundamental challenges of communication, collaboration, quality, and efficiency that skill even comes into play. So I advocate for Big Change Up Front. And I advocate for it to happen often until those involved can say, from experience, that it is no longer needed. I hope every engineer has the opportunity to see the benefits of Agile and Scrum on a highly functional team. I'll close with more key learnings that can help with a Scrum adoption: Your leaders must understand Scrum. They must understand software development, its inherent difficulties, and how Scrum helps. If you attempt to adopt Scrum before the understanding is there, your leaders will apply traditional solutions to your problems - often creating more problems. Success should be measured by quality, not revenue. Namely, the value of software to an organization is the revenue it generates minus ongoing support costs. You should identify quality-based metrics that show the effect Agile techniques have on your software. Motivation is everything. 
    I finally understand why so many Agile advocates say that if you are not on a team using Agile, you should leave and find one. Scrum and especially Agile encompass many elegant solutions to a wide variety of problems. If you are working on a team that has not encountered these problems, the team may never see the value in the solutions. Having said all that, I'm not giving up on Agile or Scrum. I am convinced it is a better approach for software development. But reality is saying that its adoption is not straightforward and is highly subject to disruption. Unless, that is, everyone really, really wants it.

    Read the article

  • Add Your Gmail Account to Outlook 2010 using POP

    - by Matthew Guay
    Are you excited about the latest version of Outlook, and want to get it set up with your Gmail accounts?  Here’s how you can easily add your Gmail account using POP to Outlook 2010.
    Getting Started
    Log into your Gmail account and go to your settings page. Under the Forwarding and POP/IMAP tab make sure POP is enabled.  You can choose to enable POP access for all new mail that arrives from now on, or for all mail in your Gmail account.  On the second option, we suggest you choose keep Gmail’s copy in the Inbox so you can still access your emails on the Gmail server.
    Add Your Account to Outlook 2010
    If you haven’t run Outlook 2010 yet, click Next to start setup and add your email account. Select Yes to add an email account to Outlook.  Now you’re ready to start entering your settings to access your email. Or, if you’ve already been using Outlook and want to add a new POP account, click File and then select Add account under Account Information.   Outlook 2010 can often automatically find and configure your account with just your email address and password, so enter these and click Next to let Outlook try to set it up automatically. Outlook will now scan for the settings for your email account. If Outlook was able to find settings and configure your account automatically, you’ll see this success screen.  Depending on your setup, Gmail is automatically set up, but sometimes Outlook fails to find the settings.  If this is the case, we’ll go back and manually configure it.
    Manually Configure Outlook for Gmail
    Back at the account setup screen, select Manually configure server settings or additional server types and click Next. Select Internet E-mail and then click Next. Enter your username, email address, and login information. Under Server information enter the following:
    Account Type: POP3
    Incoming mail server: pop.gmail.com
    Outgoing mail server: smtp.gmail.com
    Make sure to check Remember password so you don’t have to enter it every time. After that data is entered, click on the More Settings button. Select the Outgoing Server tab, and check My outgoing server (SMTP) requires authentication.  Verify Use same settings as my incoming mail server is marked as well. Next select the Advanced tab and enter the following information:
    Incoming Server (POP3): 995
    Outgoing server (SMTP): 587
    Check This server requires an encrypted connection (SSL)
    Set Use the following type of encrypted connection to TLS
    You also might want to uncheck the box to Remove messages from the server after a number of days.  This way your messages will still be accessible from Gmail online. Click OK to close the window, and then click Next to finish setting up the account.  Outlook will test your account settings to make sure everything will work; click Close when this is finished. Provided everything was entered correctly, you’ll be greeted with a successful setup message…click Finish.   Gmail will be all ready to sync with Outlook 2010.  Enjoy your Gmail account in Outlook, complete with fast indexed searching, conversation view, and more!
    Conclusion
    Adding a Gmail account to Outlook 2010 using POP is usually easy and only takes a few steps.  Even if you have to enter your settings manually, it is still a fairly simple process. You can add multiple email accounts using POP3 if you wish, and if you’d like to sync IMAP accounts, check out our tutorial on setting up Gmail using IMAP in Outlook 2010.

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about: it was done in the strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the ‘Engine’, and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy ‘note’ to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical Engine had the device’s construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for Mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as intellectual equals.
    He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’ which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affair, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature.  From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844)  And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it'. (Nov 1844)  The two of them were obviously intent on the work: she writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years' work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada.
    Secondly, Babbage would very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts people of his intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would not have had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher.  All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely, the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects, the writing of a book 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians’ military dispatches in the Crimean war. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research seems to have been lost: or has it? I've argued in the past http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx that the cracking of the Vigenère autokey cipher, was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, whose understanding of the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project.
    If the three friends were actually doing the work of cracking codes by mathematical techniques that used the techniques of key periodicity, and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending to then use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes.) This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the Nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably at her request by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean war put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytical Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean war, and the American Civil war, led to the British and Americans subsequently giving much more weight and funding to the science of decryption. Paradoxically, this makes Ada's contribution even closer to the creation of Colossus, the first digital computer, at Bletchley Park, specifically to crack the Nazis’ secret codes.

    Read the article

  • Extending ASP.NET Output Caching

    One of the most sure-fire ways to improve a web application's performance is to employ caching. Caching takes some expensive operation and stores its results in a quickly accessible location. Since its inception, ASP.NET has offered two flavors of caching:
    Output Caching - caches the entire rendered markup of an ASP.NET page or User Control for a specified duration.
    Data Caching - an API for caching objects. Using the data cache you can write code to add, remove, and retrieve items from the cache.
    Until recently, the underlying functionality of these two caching mechanisms was fixed - both cached data in the web server's memory. This has its drawbacks. In some cases, developers may want to save output cache content to disk. When using the data cache you may want to cache items to the cloud or to a distributed caching architecture like memcached. The good news is that with ASP.NET 4 and the .NET Framework 4, the output caching and data caching options are now much more extensible. Both caching features are now based upon the provider model, meaning that you can create your own output cache and data cache providers (or download and use a third-party or open source provider) and plug them into a new or existing ASP.NET 4 application. This article focuses on extending the output caching feature. We'll walk through how to create a custom output cache provider that caches a page or User Control's rendered output to disk (as opposed to memory) and then see how to plug the provider into an ASP.NET application. A complete working example, available in both VB and C#, is available for download at the end of this article. Read on to learn more! Read More >
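    Before grabbing the download, here is a minimal sketch of what such a disk-based provider can look like. The namespace, class name, cache folder, and serialization choice below are assumptions for illustration; the article's own VB/C# sample is the authoritative version.
    // Sketch of a disk-backed output cache provider for ASP.NET 4.
    // Names and the cache folder are illustrative assumptions.
    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using System.Web.Caching;

    namespace MyApp.Caching
    {
        public class DiskOutputCacheProvider : OutputCacheProvider
        {
            // A real provider would read this folder from configuration.
            private readonly string _cachePath = @"C:\OutputCache";

            public override object Get(string key)
            {
                string file = GetFilePath(key);
                if (!File.Exists(file))
                    return null;
                using (var stream = File.OpenRead(file))
                {
                    var item = (CacheItem)new BinaryFormatter().Deserialize(stream);
                    if (item.UtcExpiry < DateTime.UtcNow)
                    {
                        Remove(key);  // expired - evict and report a cache miss
                        return null;
                    }
                    return item.Entry;
                }
            }

            public override object Add(string key, object entry, DateTime utcExpiry)
            {
                // Per the provider contract, Add must not overwrite a live entry.
                object existing = Get(key);
                if (existing != null)
                    return existing;
                Set(key, entry, utcExpiry);
                return entry;
            }

            public override void Set(string key, object entry, DateTime utcExpiry)
            {
                using (var stream = File.Create(GetFilePath(key)))
                {
                    new BinaryFormatter().Serialize(stream,
                        new CacheItem { Entry = entry, UtcExpiry = utcExpiry });
                }
            }

            public override void Remove(string key)
            {
                string file = GetFilePath(key);
                if (File.Exists(file))
                    File.Delete(file);
            }

            private string GetFilePath(string key)
            {
                // Encode the cache key into a file-system-safe file name.
                string safe = Convert.ToBase64String(
                    System.Text.Encoding.UTF8.GetBytes(key)).Replace('/', '_');
                return Path.Combine(_cachePath, safe + ".bin");
            }
        }

        [Serializable]
        internal class CacheItem
        {
            public object Entry { get; set; }
            public DateTime UtcExpiry { get; set; }
        }
    }
    The provider is then plugged in through web.config, using the outputCache element's providers collection and its defaultProvider attribute - the registration step the article walks through in detail.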

    Read the article

  • Great Free Courses on Building HTML5 apps using ASP.NET Web API, Knockout.js and jQuery

    - by ScottGu
    Pluralsight has developed some great training courses on the new .NET 4.5 and VS 2012 release, including two fantastic courses from John Papa that cover how to build HTML5 web apps using ASP.NET Web API, Knockout and jQuery:
    Single Page Apps with HTML5, Web API, Knockout and jQuery
    Building HTML5 and JavaScript Apps with MVVM and Knockout
    Free 1-Month Subscription to the Courses
    Pluralsight is offering a special promotion that gives you a free 1-month subscription to watch the above courses.  There is no obligation to buy anything at the end of the offer and you don’t need to supply a credit card in order to take part in it. To get access to the courses you simply follow @pluralsight and @john_papa on Twitter and then visit this page and enter your Twitter name using the form on it.  Pluralsight will then send you a private Twitter message containing the access code that you can use to subscribe to the courses (and download the course exercise files).  Once you are subscribed you have one month to watch the courses (and you can watch them as many times as you want during the month). Pluralsight is running the promotion through Sept 18th – so sign up now to get access. Hope this helps, Scott
    P.S. And if you are new to Twitter you can also optionally follow me: @scottgu

    Read the article

  • Backup options in SharePoint 2007

    - by sreejukg
    It is very important to make sure the server farm backup is taken properly, so that in case of any disaster the administrator has the latest backup that can be used to restore. This article addresses some of the options available for backup/restore in SharePoint 2007.
    Backup
    There are two options that can be utilized to take a backup of SharePoint sites.
    Using the SharePoint Central Administration website
    Using the SharePoint Central Administration website, you can do backup/restore from the user interface. Using the Central Administration website you can back up the following:
    · Server farm
    · Web application
    · Content databases
    Follow these steps to take a backup of the server farm using Central Administration:
    1. Open the Central Administration website
    2. Navigate to Operations -> Backup and Restore -> Perform a backup
    3. Here you will have options to choose the item to back up. Select Farm (the top most item in the list)
    4. Once you select the items to back up, click on “Continue to backup options”
    5. Select “Full” as the type of backup.
    6. In the backup file location, enter the path where you need to store the backup. The path should be a UNC path, e.g. for the C drive you may use \\server\c$\mybackupFolder
    7. Click OK
    8. Now you will be redirected to the Backup and Restore Status page. This page shows the progress of the backup operation. You can use the refresh button to update the status of the backup (this page will automatically refresh every 30 seconds). Once completed, you can find the files in the specified folder.
    Using the STSADM command line tool
    SharePoint comes with the STSADM command line tool. STSADM provides a lot of administrative operations that can be performed on SharePoint 2007 sites. You can find the STSADM command in the following location: C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin (you may change the drive letter according to your installation). STSADM provides a method for performing the Office SharePoint Server 2007 administration tasks at the command line or by using batch files or scripts. STSADM also provides access to operations not available through the Central Administration site. The general syntax for STSADM is as follows:
    STSADM -operation Operation Name –parameter1 value1 –parameter2 value2 ...
    Using STSADM you can back up the following:
    · Server farm
    · Web application
    · Content databases
    To perform any STSADM operation, you need to be a member of the administrators group. Follow these steps to take a backup of the SharePoint server farm using the STSADM tool. Note: make sure you are logged in to the computer where the Central Administration website is installed.
    1. Open the command prompt (you should run the command prompt with administrator privileges)
    2. Change the working directory to C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin
    3. Enter the command, then press Enter:
    Stsadm –o backup -directory <UNC path> -backupmethod full
    4. You will get a success/failure message once the command finishes.
    How to schedule the backup
    There is no option to schedule a backup using the Central Administration site, and there is no operation provided by STSADM to automate the backup. Farm administrators need to take backups at regular intervals. To achieve this, you can write a batch file that includes the STSADM command to take a full backup of the server. This batch file can be scheduled using the Windows Task Scheduler to execute at certain intervals; a sketch of such a scheduling command follows the samples below. Sample of the batch file:
    1. Open Notepad (or any other text editor)
    2. Enter the following commands:
    @echo off
    echo ===============================================================
    echo Back up the farm to <C:\backup>
    echo ===============================================================
    cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
    @echo off
    stsadm.exe -o backup -directory "<\backup>" -backupmethod full
    echo completed
    3. Save the file with a .bat extension
    You can schedule this batch file as you require.
    Other Options
    Using the STSADM tool, you will be able to take a backup of an individual site collection. The syntax for this is:
    stsadm -o backup -url <URL name for site collection> -filename <file name> [-overwrite]
    The explanations for the parameters are as follows:
    -url The URL of the site collection you need to back up
    -filename The name of the backup file, e.g. c:\backup.bak
    -overwrite Optional. If the specified filename exists, indicates whether to overwrite it or not.
    If you are creating a batch file to schedule the backup of a site collection, you may want the backup filename to be generated automatically. One option is to generate the filename with the date, so that you keep a backup for each day. E.g. the following commands can be utilized to create a site collection backup:
    @echo off
    echo ===============================================================
    echo Back up the farm to <C:\backup>
    echo ===============================================================
    echo ===============================================================
    echo getting todays date to a variable
    echo ===============================================================
    @For /F "tokens=1,2,3 delims=/ " %%A in ('Date /t') do @(
    Set Day=%%A
    Set Month=%%B
    Set Year=%%C
    Set todayDate=%%C%%B%%A
    )
    cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
    @echo off
    stsadm -o backup -url <sitecollection url> -filename \\ServerName\ShareName\Backup_%todayDate%.bak -overwrite
    echo completed
    To read more about the STSADM backup operation, see http://technet.microsoft.com/en-us/library/cc263441.aspx
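    As the promised scheduling sketch - the task name, script path, and start time here are placeholder assumptions, not values from the article - a nightly run of the batch file could be registered from the command line with the built-in schtasks utility:
    schtasks /create /tn "SPFarmBackup" /tr "C:\scripts\farmbackup.bat" /sc daily /st 02:00
    The task should run under an account that is a member of the administrators group, since STSADM requires those rights.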

    Read the article

  • Oracle Social Network -The Social Glue for Enterprise Applications

    - by me
    Tom Petrocelli of Enterprise Strategy Group recently published a report, “Oracle Social Network: The Social Glue for Enterprise Applications”, on Oracle Social Network (OSN) and how traditional social products create social silos whereas OSN is the “social glue” for enterprise applications.  This report supports the point of Oracle’s Social Business Strategy: to seamlessly integrate social capabilities into the main business processes. Quote from the report: “Oracle has adopted the correct approach to creating a social layer and socially enabled applications. Oracle Social Network is not simply another enterprise social network product; it is a complete social layer for the enterprise application stack. This approach will serve Oracle users well in the future.” OSN allows you to capture the related Conversations of a business process right where they happen – within the respective business application.  Fusion CRM is an excellent example of this approach. Quote from the report: “Oracle’s new software, Oracle Social Network, is an example of a solution to the silo problem. While Oracle fields a typical enterprise social network application with microblogging, file sharing, shared documents or wikis, and activity streams, the front-end application is only a small part of what Oracle Social Network does. Instead, Oracle Social Network is a platform that provides social features as a service to other enterprise applications. In effect, Oracle Social Network socially enables all of Oracle’s enterprise applications—all enterprise applications really—with not only the same features, but also the same conversations. As a result, the social conversations act as a conduit for inter-application communication and collaboration.” Source: ESG Research Report, Oracle Social Network: The Social Glue for Enterprise Applications, August 2012. Cross-posted from the Oracle WebCenter blog.

    Read the article

< Previous Page | 855 856 857 858 859 860 861 862 863 864 865 866  | Next Page >