Search Results

Search found 76491 results on 3060 pages for 'web setup project'.


  • When to delete a branch in Git

    - by Jo-Herman Haugholt
    I have a script project I've been managing with Git. Besides two main branches, several minor branches have been introduced over time to cover minor features, tweaks or temporary changes. Some of these branches are nearing end-of-life, and I won't be updating them any more. What are the different philosophies for handling branches like this? Should they be removed, or left in the repository unmaintained? If I leave them, won't I end up with a cluttered repository?
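
    Not part of the question, but one common compromise between the two options is to tag a finished branch before deleting it, so its history stays reachable while the branch list stays uncluttered. A minimal sketch in Python, driving Git through subprocess; the branch names are hypothetical:

```python
# Sketch of a "tag, then delete" cleanup policy; the branch names are hypothetical.
import subprocess

def git(*args):
    return subprocess.check_output(("git",) + args, text=True)

finished_branches = ["feature/old-tweak", "tmp/holiday-banner"]

for branch in finished_branches:
    git("tag", "archive/" + branch.replace("/", "-"), branch)  # keep the tip reachable as a tag
    git("branch", "-D", branch)                                # then drop the branch ref itself

# Branches already merged into the main line are reachable anyway, so deleting them loses nothing.
merged = git("branch", "--merged", "master").split()
print("safe to delete:", [b for b in merged if b not in ("*", "master")])
```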


  • Siemens AG, Sector Healthcare, Increases Transparency and Improves Customer Loyalty with Web Portal Solution

    - by Kellsey Ruppel
    Siemens AG, Sector Healthcare, Increases Transparency and Improves Customer Loyalty with Web Portal Solution

    CUSTOMER AND PARTNER INFORMATION
    Customer Name – Siemens AG, Sector Healthcare
    Customer Revenue – 73.515 billion euros (2011, Siemens AG total)
    Customer Quote – “The realization of our complex requirements within a very short amount of time was enabled through the competent implementation partner Sapient, who fully used the very broad scope of standard functionality provided in Oracle WebCenter Portal, and the management of customer services, who continuously supported the project setup.” – Joerg Modlmayr, Project Manager, Healthcare Customer Service Portal, Siemens AG

    The Siemens Healthcare Sector is one of the world's largest suppliers to the healthcare industry and a trendsetter in medical imaging, laboratory diagnostics, medical information technology and hearing aids. Siemens offers its customers products and solutions for the entire range of patient care from a single source – from prevention and early detection to diagnosis, and on to treatment and aftercare. By optimizing clinical workflows for the most common diseases, Siemens also makes healthcare faster, better and more cost-effective.

    To ensure greater transparency, increased efficiency, higher user acceptance, and additional services, Siemens AG, Sector Healthcare, replaced several existing legacy portal solutions that could not meet the company’s future needs with Oracle WebCenter Portal. Various existing portal solutions that cannot meet future demands will be successively replaced by the new central service portal, which will also allow for the efficient and intuitive implementation of new service concepts. With Oracle, doctors and hospitals using Siemens medical solutions now have access to a central information portal that provides important information and services at just the push of a button.

    Customer Name – Siemens AG, Sector Healthcare
    Customer URL – www.siemens.com
    Customer Headquarters – Erlangen, Germany
    Industry – Industrial Manufacturing
    Employees – 360,000

    Challenges –
    Replace disparate medical service portals to meet future demands and eliminate an unnecessarily high level of administrative work caused by heterogeneous installations
    Ensure portals meet current user demands to improve user-acceptance rates and increase the number of total users
    Enable changes and expansion through standard functionality to eliminate the need for reliance on IT and reduce administrative efforts and associated high costs
    Ensure efficient and intuitive implementation of new service concepts for all devices and systems
    Enable hospitals and clinics to transparently monitor and measure services rendered for the various medical devices and systems
    Increase electronic interaction and expand services to achieve a higher level of customer loyalty

    Solution –
    Deployed Oracle WebCenter Portal to ensure greater transparency and, as a result, a higher level of customer loyalty
    Provided a centralized platform for doctors and hospitals using Siemens’ medical technology solutions that delivers important information and services at the push of a button
    Significantly reduced the administrative workload by centralizing the solution in the new customer service portal
    Secured positive feedback from customers involved in the pilot program developed by design experts from Oracle partner Sapient; the interfaces were created with customer needs in mind. The first survey, taken shortly after implementation, came back with 2.4 points on a scale of 0–3 in the category “customer service portal intuitiveness level”
    Met all requirements, including alignment with the Siemens Style Guide, without extensive programming
    Implemented additional services via the portal, such as benchmarking options, to ensure the optimal use of the Customer Device Park
    Provided the option to document all services rendered in conjunction with the medical technology systems so that the value of the services is transparent for the decision makers in the hospitals
    Saved and stored all machine data from approximately 100,000 remote systems in the central service and information platform
    Provided the option to register errors online and follow the call status in real time on the portal
    Made available at the push of a button all information on the medical technology devices used in hospitals or clinics – from security checks and maintenance activities to current device statuses
    Provided PDF-format Service Performance Reports that summarize information from periods ranging from the previous weeks up to one year, meeting medical product law requirements

    Why Oracle – Siemens AG favored Oracle for many reasons; however, the company ultimately decided to go with Oracle because of the enormous range of functionality the solutions offered for the healthcare sector. “We are not programmers; we are service providers in the medical technology segment and focus on the contents of the portal. All the functionality necessary for internet-based customer interaction is already standard in Oracle WebCenter Portal, which is a huge plus for us. Having Oracle as our technology partner ensures that the product will continually evolve, providing a strong technology platform for our customer service portal well into the future,” said Joerg Modlmayr, project manager, Healthcare Customer Service Portal, Siemens AG.

    Partner Involvement – Siemens AG selected Oracle Partner Sapient because the company offered a service portfolio that perfectly met Siemens’ requirements and had a wealth of experience implementing Oracle WebCenter Portal. Additionally, Sapient had designers with a very high level of expertise in usability – an aspect that Siemens considered to be of vast importance for the project. “The Sapient team completely met all our expectations. Our tightly timed project was completed on schedule, and the positive feedback from our users proves that we set the right measures in terms of usability – all thanks to the folks at Sapient,” Modlmayr said.

    Partner Name – Sapient GmbH Deutschland
    Partner URL – www.sapient.com


  • When Is It Acceptable to NOT Fix Broken Windows?

    - by Bullines
    In reference to broken windows... Are there times when refactoring is best left for a future activity? For example, if a project to add some new features to an existing internal system is assigned to a team that has not worked with the system until now, and is given a short timeline in which to work - can it ever be justifiable to defer major refactorings of the existing code for the sake of making the deadline in this scenario?


  • How do great enterprises estimate software development efforts?

    - by Ed Pichler
    I was learning about how to estimate software development effort, and I would like to know how successful enterprises estimate their projects. How do they work out how much time a system will take to develop? What are the modern techniques for doing this, and which techniques do these enterprises actually use? Some articles and interviews with employees of those enterprises would be interesting. I asked on the Project Management site of Stack Exchange too.
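
    Purely as an illustration of one widely documented technique (PERT, or three-point estimation) rather than a claim about what any particular enterprise uses - the tasks and numbers below are invented:

```python
# PERT ("three-point") estimate: weighted mean of optimistic, most-likely and pessimistic guesses.
def pert(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6   # rough one-sigma uncertainty
    return expected, spread

tasks = {                      # hypothetical tasks, estimates in person-days
    "login screen": (2, 4, 9),
    "reporting module": (5, 8, 20),
}

total = 0.0
for name, estimate in tasks.items():
    expected, spread = pert(*estimate)
    total += expected
    print(f"{name}: {expected:.1f} +/- {spread:.1f} person-days")
print(f"rough project total: {total:.1f} person-days")
```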


  • What to do when you inherit an unmaintainable codebase?

    - by GordonM
    I'm currently working at a company with 2 other PHP developers aside from me, and 1 junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, who is the only other guy who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left behind as the only experienced developer on this codebase. Since I joined this company I've tried to push for better coding standards, project documentation, etc., and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is working on trying to at least comment all his code before he leaves, but I think the codebase is simply too vast to properly document in the remaining time. Besides, when he does comment, it still doesn't make things as clear as it could. If the system were better organized and documented I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without unintended knock-on effects in other modules. Naturally, there are no unit tests either, and I honestly don't think this codebase could possibly be unit tested anyway, given how it's implemented. There also never seems to be enough time to get things done, even with 3 developers and 1 junior developer. With one developer and one junior, neither of whom had significant input into the early design of the system, I don't see how we could possibly get anything done while keeping the current system working, implementing new features as needed, and developing a replacement for the current codebase that is better organized. Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it were just me and the junior developer who would be left, I'd go for the latter option almost without question. However, there's a team of front-end developers and content managers as well, and I'm worried what would become of them if I left and put them in a position where there would be no developers at all. The department might just be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!


  • INI files or Registry or personal files

    - by Shirish11
    I want to save the configuration of my project, which includes screen size, screen position, folder paths, user settings and so on. The standard places where you can save these configuration values are: the registry, INI files, and personal files (like *.cfg). I would like to know how you choose between these places. Also, are there any pros and cons to using any of them?
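
    For illustration only (the question does not name a language), here is a minimal sketch of the INI-file option using Python's standard library; the file location, sections and keys are hypothetical. On Windows, the same values could instead live under HKEY_CURRENT_USER via the winreg module, which keeps them per-user but makes them harder to inspect, back up or move to another machine.

```python
# Minimal INI-file settings store; the file location and keys are hypothetical examples.
import configparser
from pathlib import Path

CONFIG_PATH = Path.home() / ".myproject" / "settings.ini"

def save_settings(width, height, x, y, data_dir):
    cfg = configparser.ConfigParser()
    cfg["screen"] = {"width": str(width), "height": str(height), "x": str(x), "y": str(y)}
    cfg["paths"] = {"data_dir": data_dir}
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with CONFIG_PATH.open("w") as fh:
        cfg.write(fh)

def load_settings():
    cfg = configparser.ConfigParser()
    cfg.read(CONFIG_PATH)          # returns an empty config if the file is missing
    return cfg

save_settings(1280, 720, 100, 50, r"C:\data\myproject")
print(load_settings()["screen"]["width"])
```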


  • What is an “implementation plan”?

    - by Abe Miessler
    I was recently given the task of creating an implementation plan document. When I asked for an example of one that I could look at, I was told to look at the Project Plan that had already been created and use that as a base. I'm still a bit confused about what I should be creating. Can anyone point me to a good example out there, or to something that explains what this is and, more importantly, the details of what it should contain?


  • How can teams collaborate on Unity 3D projects?

    - by nosferat
    A friend of mine and I are planning to develop a small game to get the hang of game development and teamwork. But since Unity 3D barely supports version control (or at least the free version lacks it), we have no idea how to efficiently manage teamwork. Sharing tasks in a small project also seems like a challenge for us. I would also appreciate any advice related to teamwork that could be useful for beginner indie developers. :)
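
    Whichever version-control system ends up being used, a common first step (an assumption drawn from general practice, not from Unity documentation) is to keep Unity's generated folders out of the repository. A tiny sketch that writes such an ignore file:

```python
# Writes a minimal .gitignore for a Unity project; the folder list reflects common practice
# (generated, rebuildable content) and is an assumption, not something Unity prescribes.
from pathlib import Path

GENERATED = ["Library/", "Temp/", "Obj/", "Build/", "*.csproj", "*.sln", "*.pidb", "*.userprefs"]

def write_ignore(project_root="."):
    ignore_file = Path(project_root) / ".gitignore"
    ignore_file.write_text("\n".join(GENERATED) + "\n")
    print("wrote", ignore_file)

write_ignore()
```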


  • What are some known approaches to collaborative schema design?

    - by Omega
    If a project has multiple developers, each with useful knowledge and experience that can aid in the design of a schema, what are some known processes for collaboratively planning that schema out? Are there any types of meetings that are useful for this purpose? This would be in contrast to circumstances where projects are started and models are developed unilaterally, by coincidence rather than as part of a structured understanding of the domain.


  • asp.net mvc custom model binder

    - by mike
    Please help, guys. My custom model binder, which has been working perfectly, has started giving me errors. Details below.

    An item with the same key has already been added.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.ArgumentException: An item with the same key has already been added.
    Source Error:
    Line 31: {
    Line 32:     string key = bindingContext.ModelName;
    Line 33:     var doc = base.BindModel(controllerContext, bindingContext) as Document;
    Line 34:
    Line 35:     // DoBasicValidation(bindingContext, doc);
    Source File: C:\Users\Bich Vu\Documents\Visual Studio 2008\Projects\PitchPortal\PitchPortal.Web\Binders\DocumentModelBinder.cs Line: 33
    Stack Trace:
    [ArgumentException: An item with the same key has already been added.]
    System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) +51
    System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) +7462172
    System.Linq.Enumerable.ToDictionary(IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer) +270
    System.Linq.Enumerable.ToDictionary(IEnumerable`1 source, Func`2 keySelector, IEqualityComparer`1 comparer) +102
    System.Web.Mvc.ModelBindingContext.get_PropertyMetadata() +157
    System.Web.Mvc.DefaultModelBinder.BindProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor) +158
    System.Web.Mvc.DefaultModelBinder.BindProperties(ControllerContext controllerContext, ModelBindingContext bindingContext) +90
    System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model) +50
    System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +1048
    System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +280
    PitchPortal.Web.Binders.documentModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) in C:\Users\Bich Vu\Documents\Visual Studio 2008\Projects\PitchPortal\PitchPortal.Web\Binders\DocumentModelBinder.cs:33
    System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +257
    System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +109
    System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +314
    System.Web.Mvc.Controller.ExecuteCore() +105
    System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +39
    System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7
    System.Web.Mvc.<c_DisplayClass8.b_4() +34
    System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21
    System.Web.Mvc.Async.<c_DisplayClass81.b_7(IAsyncResult _) +12
    System.Web.Mvc.Async.WrappedAsyncResult`1.End() +59
    System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +44
    System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7
    System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677678
    System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155

    Any ideas, guys? Thanks


  • Is SQL Azure a newbies springboard?

    - by jamiet
    Earlier today I was considering the various SQL Server platforms that are available today and I wondered aloud, “wonder how long until the majority of #sqlserver newcomers use @sqlazure instead of installing locally”. Let me explain. My first experience of development was way back in the early 90s, when I would crank open VBA in Access or Excel and start hammering out some code, usually by recording macros and looking at the code that they produced (sound familiar?). The reason was simple: Office was becoming ubiquitous, so the barrier to entry was incredibly low and, save for a short hiatus at university, I’ve been developing on the Microsoft platform ever since. These days I spend most of my time using SQL Server. When I take a look at SQL Azure today I see a lot of similarities with those early experiences; the barrier to entry is low and getting lower. I don’t have to download some software or actually install anything other than a web browser in order to get myself a fully functioning SQL Server database against which I can ostensibly start hammering out some code, and I believe that to be incredibly empowering. Having said that, there are still a few pretty high barriers, namely: I need to get out my credit card, and it’s pretty useless without some development tools such as SQL Server Management Studio, which I do have to install. The second of those barriers will disappear pretty soon when Project Houston delivers a web-based admin and presentation tool for SQL Azure, so that just leaves the matter of my having to use a credit card. If Microsoft have any sense at all then they will realise the huge potential of opening up a free, throttled version of SQL Azure for newbies to party on; they get to developers early (just like they did with me all those years ago) and it gives potential customers an opportunity to try-before-they-buy. Perhaps in 20 years’ time people will be talking about SQL Azure as being their first foray into the world of coding! @Jamiet


  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around(probably debugging). 1. Create certificate 1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”. 1.2 In the drop down list of certificates, choose <create> at the bottom. 1.3. Follow the instructions, you can set it up with default values. 1.4 When done. Choose the certificate and click “Copy to File…” as seen in the left of the picture above. 1.5. Save the file with any name you want. Now we will save it to local storage to be able to import it to our solution through the azure configuration manager in step 3. 2. Save certificate to local storage Now we need to attach it to our local certificate storage to be able to reach it from our confiuguration manager in visual studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137 In order to view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close , and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use. Now that you have access to the Certificates snap-in, you can import the server certificate into you computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed. If it is not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist.) Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, it will also be added to this store. Click Next. You should see a summary of screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate). Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on. 
Right-click on the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing that contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.   3. Import the certificate to you visual studio project. 3.1 Now right click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose properties. 3.2 Choose Certificates. Right click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate here. After selecting it- save the file.   4. Upload the certificate to your Azure subscription. 4.1 Go to the azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.     5. Connect to server. Since I tried to use account settings(have to use another name) we have to set up a new name for the connection. No biggie. 5.1 Go to azure management portal, select your service and in the bottom menu, choose “REMOTE”. This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change It here it might be good to choose download and replace the one in your project. Set a name that is not your windows azure account name and not Administrator. 5.2 Goto visual studio, click Server Explorer. Choose as selected in the picture below and click “COnnect using remote desktop”.   5.2 You will now be able to log in with the name and password set up in step 5.1. and voila! Windows server 2012, IIS and other nice stuff!   To do this one I’ve been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can collect some of this information and additional one.
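
    As an aside to step 1 above: the Visual Studio wizard creates the certificate for you, but for illustration, this is roughly what "self-signed certificate plus PFX export" amounts to. The sketch uses the third-party cryptography package, which is an assumption on my part and not part of the Azure tooling; the subject name and export password are placeholders.

```python
# Illustration only: generate a self-signed certificate and export it as a PFX file.
# Requires the third-party `cryptography` package; names and password are placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs12

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"myservice.cloudapp.net")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
pfx = pkcs12.serialize_key_and_certificates(
    b"azure-remote-desktop", key, cert, None,
    serialization.BestAvailableEncryption(b"placeholder-password"),
)
with open("remote_desktop.pfx", "wb") as fh:
    fh.write(pfx)
```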


  • Is there a product planning tool that has these specific features? [closed]

    - by acjohnson55
    I am working on a web startup in the early stages, and we are struggling a bit to manage the scope and scheduling of our product. We have loads of high-level features in the pipeline, but we need a good way of scheduling them for release iterations and breaking them into actual tasks that can be scheduled (that could be a separate tool, but integration would be preferred). I would say that our product can be pretty cleanly divided into "aspects", and we want to be able to separate features by the aspect to which they apply. Perhaps most importantly, it should be really simple to create and move features between target release points. We don't have physical space for a war room type setup, so whatever we settle upon should ideally have a cloud-type web interface. Right now, we're using Excel to make a grid of product aspects vs. target releases, and we store features at the intersections. But this is not providing a good way of indexing tasks to those features or being able to move them around. I would much rather have something that automates the grid overview. I'm less interested in something that helps with low-level scheduling than I am in something that is good at organizing the product plan at the long-term, high-level view. Is there a product planning tool out there that matches these specifications?


  • Configuration file 'C:\my\App.Config' is being used to configure all executables

    - by Taylor Leese
    I have a Visual Studio setup project that installs an application into the Task Scheduler and also installs a GUI application to manage some configuration parameters in the registry. This being the case, the setup project installs two different primary outputs (.exe files) as part of the process. I am getting the following warning when I rebuild the setup project: "Configuration file 'C:\my\App.Config' is being used to configure all executables". Is there any way to remove this warning? The suggested MSFT solution appears to be to use a different setup project for each .exe, but I only want the users to have to run one installer. Any suggestions?


  • Why can't I build Deluge?

    - by hugemeow
    Deluge is a BitTorrent client. I am trying to build it from source, since I don't have the privilege to install it as root. I am using python setup.py build, but it failed with the following message. Why?

    copying deluge/ui/web/themes/images/gray/slider/slider-v-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/slider
    copying deluge/ui/web/themes/images/gray/slider/slider-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/slider
    copying deluge/ui/web/themes/images/gray/panel/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/panel
    copying deluge/ui/web/themes/images/gray/tabs/tab-strip-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/gray/tabs
    copying deluge/ui/web/themes/images/yourtheme/window/right-corners.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window
    copying deluge/ui/web/themes/images/yourtheme/window/left-corners.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window
    copying deluge/ui/web/themes/images/yourtheme/window/left-right.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window
    copying deluge/ui/web/themes/images/yourtheme/window/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/window
    creating build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider
    copying deluge/ui/web/themes/images/yourtheme/slider/slider-v-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider
    copying deluge/ui/web/themes/images/yourtheme/slider/slider-thumb.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider
    copying deluge/ui/web/themes/images/yourtheme/slider/slider-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider
    copying deluge/ui/web/themes/images/yourtheme/slider/slider-v-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/slider
    copying deluge/ui/web/themes/images/yourtheme/panel/top-bottom.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/panel
    copying deluge/ui/web/themes/images/yourtheme/grid/hmenu-lock.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/grid
    copying deluge/ui/web/themes/images/yourtheme/grid/hmenu-unlock.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/grid
    copying deluge/ui/web/themes/images/yourtheme/tabs/tab-strip-bg.png -> build/lib.linux-x86_64-2.4/deluge/ui/web/themes/images/yourtheme/tabs
    running build_ext
    building 'libtorrent' extension
    gcc -pthread -shared -L/usr/lib64 -L/opt/local/lib -lboost_filesystem -lboost_date_time -lboost_iostreams -lboost_python -lboost_thread -lpthread -lssl -lz -o build/lib.linux-x86_64-2.4/deluge/libtorrent.so
    /usr/bin/ld: cannot find -lboost_filesystem
    collect2: ld returned 1 exit status
    error: command 'gcc' failed with exit status 1
    [mirror@innov deluge-1.3.5]$ echo $?
    1

    Edit 1: gcc version and OS information

    $(which gcc) --version
    gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
    Copyright (C) 2006 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    cat /etc/issue
    CentOS release 5.7 (Final)
    Kernel \r on an \m

    Edit 2: boost is referenced by setup.py in Deluge:

    if OS == "linux":
        if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \
                          'libboost_filesystem-mt.so')):
            boost_filesystem = "boost_filesystem-mt"
        elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \
                            'libboost_filesystem.so')):
            boost_filesystem = "boost_filesystem"
        if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \
                          'libboost_date_time-mt.so')):
            boost_date_time = "boost_date_time-mt"
        elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \
                            'libboost_date_time.so')):
            boost_date_time = "boost_date_time"
        if os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \
                          'libboost_thread-mt.so')):
            boost_thread = "boost_thread-mt"
        elif os.path.exists(os.path.join(sysconfig.get_config_vars()['LIBDIR'], \
                            'libboost_thread.so')):
            boost_thread = "boost_thread"

        if 'boost_filesystem' not in vars():
            boost_filesystem = "boost_filesystem-mt"
        if 'boost_date_time' not in vars():
            boost_date_time = "boost_date_time-mt"
        if 'boost_thread' not in vars():
            boost_thread = "boost_thread-mt"

    elif OS == "freebsd":
        boost_filesystem = "boost_filesystem"
        boost_date_time = "boost_date_time"
        boost_thread = "boost_thread"
    else:
        boost_filesystem = "boost_filesystem-mt"
        boost_date_time = "boost_date_time-mt"
        boost_thread = "boost_thread-mt"

    librariestype = [boost_filesystem, boost_date_time,
                     boost_thread, 'z', 'pthread', 'ssl', 'crypto']
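
    The linker error at the end (cannot find -lboost_filesystem) means none of the paths on that gcc line contain the Boost filesystem library. A small diagnostic sketch, standard library only, that mirrors the probe setup.py itself performs:

```python
# Mirrors setup.py's probe: does LIBDIR (or the extra -L paths on the gcc line)
# actually contain a Boost filesystem library?
import glob
import os
from distutils import sysconfig

libdir = sysconfig.get_config_vars()["LIBDIR"]
search_paths = [libdir, "/usr/lib64", "/opt/local/lib"]

for path in search_paths:
    hits = glob.glob(os.path.join(path, "libboost_filesystem*"))
    print(path, "->", hits if hits else "no libboost_filesystem* here")
```

    If nothing turns up, installing the Boost development packages (or pointing the build at a Boost install in your home directory) would be the usual next step.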


  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one. Setup :- A switch B C |----| |---| |----| |----| |eth0|----| |----|eth0| | | |----| |---| |eth1|----|eth1| |----| |----| Eg, SSH/Samba from A to C How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if your looking for 192.168.0.2, its here on eth1"? Is this NAT? This is a large private network, so what about if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234" B would say "hi on eth0, traffic for port 1234 goes on here eth1" How could that be done? And would the SSH/Samba demons see the correct packet header info and work?? IP info :- A - eth0 - 192.168.109.2 B - eth0 - B1 = 192.168.109.15 B2 = 172.24.40.130 - eth1 - 192.168.0.1 C - eth1 - 192.168.0.2 A, B & C are RHEL (RedHat) But Windows computers can be connected to the switch. I configured the 192.168.0.* IPs, they are changeable. Update after response from Eddie Few problems (and Machines' B IP is different!) From A :- ssh 172.24.40.130 works ok, (can get to B2) but ssh 172.24.40.130 -p 2022 -vv times out with :- OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 172.24.40.130 [172.24.40.130] port 2022. ...wait ages... debug1: connect to address 172.24.40.130 port 2022: Connection timed out ssh: connect to host 172.24.40.130 port 2022: Connection timed out From B2 :- $ service iptables status Table: filter Chain INPUT (policy ACCEPT) num target prot opt source destination Chain FORWARD (policy ACCEPT) num target prot opt source destination 1 ACCEPT tcp -- 0.0.0.0/0 192.168.0.2 tcp dpt:22 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Table: nat Chain PREROUTING (policy ACCEPT) num target prot opt source destination 1 DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:2022 to:192.168.0.2:22 Chain POSTROUTING (policy ACCEPT) num target prot opt source destination Chain OUTPUT (policy ACCEPT) num target prot opt source destination And ssh from B2 to C works fine :- $ ssh 192.168.0.2 Route info :- $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.0.0 * 255.255.255.0 U 0 0 0 eth1 172.24.40.0 * 255.255.255.0 U 0 0 0 eth0 169.254.0.0 * 255.255.0.0 U 0 0 0 eth1 default 172.24.40.1 0.0.0.0 UG 0 0 0 eth0 $ ip route 192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.1 172.24.40.0/24 dev eth0 proto kernel scope link src 172.24.40.130 169.254.0.0/16 dev eth1 scope link default via 172.24.40.1 dev eth0 So I just dont know why the port forward doesnt work from A to B2?
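
    One way to narrow down which hop is failing, as a supplement to the question: a tiny reachability probe run from host A, standard library only, using the addresses and ports already given above. A timeout on port 2022 but not on port 22 points at the forwarding rules on B rather than at the SSH daemon on C.

```python
# Quick reachability check from host A; addresses and ports are the ones in the question.
import socket

def can_connect(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"{host}:{port} failed: {exc}")
        return False

print("B2 sshd directly :", can_connect("172.24.40.130", 22))
print("forwarded to C   :", can_connect("172.24.40.130", 2022))
```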


  • Triple monitor setup in linux

    - by Brendan Abel
    I'm hoping there are some xorg gurus out there. I'm trying to get a three monitor setup working in linux. I have 2 lcd monitors and a tv, all different resolutions. I'm using 2 video cards; a 9800 GTX and 7900Gt. I've seen a lot of different posts about people trying to make this work, and in every case, they either gave up, or Xinerama magically solved all their problems. Basically, my main problem is that I cannot get Xinerama to work. Every time I turn it on in the options, my machine gets stuck in a neverending boot cycle. If I disable Xinerama, I just have three Xorg screens, but I can't drag windows from one to the other. I can get the 2 lcds on Twinview, and the tv on a separate Xorg screen no problem. But I don't really like this solution. I'd rather have them all on separate screens and stitch them together with Xinerama. Has anyone done this? Here's my xorg.conf for reference. p.s. This took me all of 30 seconds to set up in Windows XP! p.s.s. I've seen somewhere that maybe randr can solve my problems? But I'm not quite sure how? Section "Monitor" Identifier "Main1" VendorName "Acer" ModelName "H233H" HorizSync 40-70 VertRefresh 60 Option "dpms" EndSection #Section "Monitor" # Identifier "Main2" # VendorName "Acer" # ModelName "AL2216W" # HorizSync 40-70 # VertRefresh 60 # Option "dpms" #EndSection Section "Monitor" Identifier "Projector" VendorName "BenQ" ModelName "W500" HorizSync 44.955-45 VertRefresh 59.94-60 Option "dpms" EndSection Section "Device" Identifier "Card1" Driver "nvidia" VendorName "nvidia" BusID "PCI:5:0:0" BoardName "nVidia Corporation G92 [GeForce 9800 GTX+]" Option "ConnectedMonitor" "DFP,DFP" Option "NvAGP" "0" Option "NoLogo" "True" #Option "TVStandard" "HD720p" EndSection Section "Device" Identifier "Card2" Driver "nvidia" VendorName "nvidia" BusID "PCI:4:0:0" BoardName "nVidia Corporation G71 [GeForce 7900 GT/GTO]" Option "NvAGP" "0" Option "NoLogo" "True" Option "TVStandard" "HD720p" EndSection Section "Module" Load "glx" EndSection Section "Screen" Identifier "ScreenMain-0" Device "Card1-0" Monitor "Main1" DefaultDepth 24 Option "Twinview" Option "TwinViewOrientation" "RightOf" Option "MetaModes" "DFP-0: 1920x1080; DFP-1: 1680x1050" Option "HorizSync" "DFP-0: 40-70; DFP-1: 40-70" Option "VertRefresh" "DFP-0: 60; DFP-1: 60" #SubSection "Display" # Depth 24 # Virtual 4880 1080 #EndSubSection EndSection Section "Screen" Identifier "ScreenProjector" Device "Card2" Monitor "Projector" DefaultDepth 24 Option "MetaModes" "TV-0: 1280x720" Option "HorizSync" "TV-0: 44.955-45" Option "VertRefresh" "TV-0: 59.94-60" EndSection Section "ServerLayout" Identifier "BothTwinView" Screen "ScreenMain-0" Screen "ScreenProjector" LeftOf "ScreenMain-0" #Option "Xinerama" "on" # most important option let you window expand to three monitors EndSection


  • Trying to setup postfix

    - by Frexuz
    I used this guide: http://jonsview.com/how-to-setup-email-services-on-ubuntu-using-postfix-tlssasl-and-dovecot telnet localhost 25 says 220 episodecalendar.com ESMTP Postfix (Ubuntu) ehlo localhost 250-episodecalendar.com 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-AUTH LOGIN PLAIN 250-AUTH=LOGIN PLAIN 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN Installation seems fine? /var/log/mail.log says Nov 26 14:04:06 ubuntu postfix/pickup[12107]: A742E2B9E1: uid=0 from=<root> Nov 26 14:04:06 ubuntu postfix/cleanup[12114]: A742E2B9E1: message-id=<[email protected]> Nov 26 14:04:06 ubuntu postfix/qmgr[12108]: A742E2B9E1: from=<[email protected]>, size=300, nrcpt=1 (queue active) Nov 26 14:04:06 ubuntu postfix/local[12115]: A742E2B9E1: to=<[email protected]>, relay=local, delay=3.3, delays=3.3/0/0/$ Nov 26 14:04:06 ubuntu postfix/cleanup[12114]: AD2662B9E0: message-id=<[email protected]> Nov 26 14:04:06 ubuntu postfix/qmgr[12108]: AD2662B9E0: from=<>, size=2087, nrcpt=1 (queue active) Nov 26 14:04:06 ubuntu postfix/bounce[12117]: A742E2B9E1: sender non-delivery notification: AD2662B9E0 Nov 26 14:04:06 ubuntu postfix/local[12115]: AD2662B9E0: to=<[email protected]>, relay=local, delay=0.02, delays=0.01/0/0/0$ Nov 26 14:04:06 ubuntu postfix/qmgr[12108]: AD2662B9E0: removed Nov 26 14:04:06 ubuntu postfix/qmgr[12108]: A742E2B9E1: removed I'm not really understanding the log file, and obviously I'm not getting any emails. Right now I'm running Ubuntu on a Virtualbox (development box). Is that a problem? The internet connection works fine on it. What about domains etc..? edit: /etc/postfix/main.cf # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client.
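
    A small sketch that pushes a test message through the local Postfix listener, doing in code what the telnet session does by hand; the addresses are placeholders, since the real ones are redacted in the log above.

```python
# Send a test message via the local Postfix listener; addresses are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "root@localhost"
msg["To"] = "someone@example.com"
msg["Subject"] = "postfix delivery test"
msg.set_content("If this arrives, submission and relaying both work.")

with smtplib.SMTP("localhost", 25) as smtp:
    smtp.set_debuglevel(1)        # echo the full SMTP dialogue for troubleshooting
    smtp.send_message(msg)
```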


  • Linux Fiber Channel Host Setup Basic

    - by Jim
    I've been googling for about 4 hours now with no luck. I am trying to setup a Linux server running Oracle Server 6.3 as a Fiber Channel host. And then connect it to a Dell Compellent Fibre Channel Host contain a 500GB Volume. The Oracle server itself contains two Brocade 815 FC HBAs. I've discovered their WWN(I think) via cat /sys/class/fc_host/host1/port_name 0x100000051efc3d85 cat /sys/class/fc_host/host2/port_name 0x100000051efc3d9f The next part is where I am at a loss. I've used iSCSI before...is FC the same deal where you have an initiator and a target? If so where do I specific that in linux? I'm also new to Fiber Channel as a protocol, so i am unsure what is needed to make a transaction? WWN and port ID? Similar to IP:Port combination in the Ethernet world. I've read alot regarding using systool, multipath, fc_transport commands, however none of these is recognized as a valid command from Oracle Server 6.3 Appreciate the guidance and assistance. I installed sccsi-target-utils and can now run rescan-scsi-bus and sg_map -x. rescan-scsi-bus.sh -l -w -r Host adapter 0 (megaraid_sas) found. Host adapter 1 ((null)) found. Host adapter 2 ((null)) found. Host adapter 3 (ata_piix) found. Host adapter 4 (ata_piix) found. Scanning SCSI subsystem for new devices and remove devices that have disappeared Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, LUNs 0 1 2 3 4 5 6 7 Scanning for device 0 2 0 0 .... OLD: Host: scsi0 Channel: 02 Id: 00 Lun: 00 Vendor: DELL Model: PERC H700 Rev: 2.30 Type: Direct-Access ANSI SCSI revision: 05 Scanning for device 0 2 1 0 ... OLD: Host: scsi0 Channel: 02 Id: 01 Lun: 00 Vendor: DELL Model: PERC H700 Rev: 2.30 Type: Direct-Access ANSI SCSI revision: 05 Scanning host 1 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7 Scanning for device 1 0 3 1 ... OLD: Host: scsi1 Channel: 00 Id: 03 Lun: 01 Vendor: COMPELNT Model: Compellent Vol Rev: 0505 Type: Direct-Access ANSI SCSI revision: 05 Scanning host 2 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7 Scanning host 3 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7 Scanning for device 3 0 0 0 ... REM: Host: scsi3 Channel: 00 Id: 00 Lun: 00 DEL: Vendor: TEAC Model: DVD-ROM DV-28SW Rev: R.2A Type: CD-ROM ANSI SCSI revision: 05 Scanning host 4 channels 0 for SCSI target IDs 0, LUNs 0 1 2 3 4 5 6 7 0 new device(s) found. 1 device(s) removed. and sg_map -x /dev/sg0 0 0 32 0 13 /dev/sg1 0 2 0 0 0 /dev/sda /dev/sg2 0 2 1 0 0 /dev/sdb /dev/sg4 1 0 3 1 0 /dev/sdc I'm not sure what this all means...
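
    The WWNs read with cat above can be collected in one pass; a minimal sketch, standard library only, that walks the same sysfs files:

```python
# List each FC HBA's world wide port name from sysfs, as read manually above.
import glob
import os

for path in sorted(glob.glob("/sys/class/fc_host/host*/port_name")):
    host = path.split(os.sep)[-2]             # e.g. "host1"
    with open(path) as fh:
        print(host, fh.read().strip())        # e.g. host1 0x100000051efc3d85
```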


  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle. Typical smaller shop setup. There is now one additional network that has an IP range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another company's network, combined with ours via a VPN they set up. This means: they control the router that is used for this network, and they can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN-capable switches, as it covers three locations. At one end there is the router the other company controls. I need/want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my Active Directory domain. The people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence. Any sort of integration using the externally controlled router is ruled out by this requirement. So, my idea is this: we accept that the IPv4 address space and network topology in this network are not under our control, and we seek alternatives to integrate those machines into our company network. The 2 concepts I came up with are: (1) Use some sort of VPN - have the machines log into a VPN. Thanks to them using modern Windows, this could be transparent DirectAccess. This essentially treats the other IP space no differently than any restaurant network a company laptop goes into. (2) Alternatively, establish IPv6 routing to this ethernet segment, but block all IPv6 packets in the switch before they hit the third-party-controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would not get a single packet. The switch can nicely do that by pulling all IPv6 traffic coming to that port into a separate VLAN (based on ethernet protocol type). Does anyone see a problem with using the switch to isolate us from the outer router's IPv6? Any security hole? It is sad we have to treat this network as hostile - it would be a lot easier otherwise - but the support personnel there are of "known dubious quality" and the legal side is clear - we cannot fulfill our obligations when we integrate them into our company while they are under a jurisdiction we don't have a say in.


  • spring web application context is not loaded from jar file in WEB-INF/lib when running tomcat in eclipse

    - by Remy J
    I am experimenting with spring, maven, and eclipse but stumbling on a weird issue. I am running Eclipse Helios SR1 with the STS (Spring tools suite) plugin which includes the Maven plugin also. What i want to achieve is a spring mvc webapp which uses an application context loaded from a local application context xml file, but also from other application contexts in jar files dependencies included in WEB-INF/lib. What i'd ultimately like to do is have my persistence layer separated in its own jar file but containing its own spring context files with persistence specific configuration (e.g a jpa entityManagerFactory for example). So to experiment with loading resources from jar dependencies, i created a simple maven project from eclipse, which defines an applicationContext.xml file in src/main/resources Inside, i define a bean <bean id="mybean" class="org.test.MyClass" /> and create the class in the org.test package I run mvn-install from eclipse, which generates me a jar file containing my class and the applicationContext.xml file: testproj.jar |_META-INF |_org |_test |_MyClass.class |_applicationContext.xml I then create a spring mvc project from the Spring template projects provided by STS. I have configured an instance of Tomcat 7.0.8 , and also an instance of springSource tc Server within eclipse. Deploying the newly created project on both servers works without problem. I then add my previous project as a maven dependency of the mvc project. the jar file is correctly added in the Maven Dependencies of the project. In the web.xml that is generated, i now want to load the applicationContext.xml from the jar file as well as the existing one generated for the project. My web.xml now looks like this: org.springframework.web.context.ContextLoaderListener <!-- Processes application requests --> <servlet> <servlet-name>appServlet</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value> classpath*:applicationContext.xml, /WEB-INF/spring/appServlet/servlet-context.xml </param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>appServlet</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> Also, in my servlet-context.xml, i have the following: <context:component-scan base-package="org.test" /> <context:component-scan base-package="org.remy.mvc" /> to load classes from the jar spring context (org.test) and to load controllers from the mvc app context. I also change one of my controllers in org.remy.mvc to autowire MyClass to verify that loading the context has worked as intended. public class MyController { @Autowired private MyClass myClass; public void setMyClass(MyClass myClass) { this.myClass = myClass; } public MyClass getMyClass() { return myClass; } [...] } Now this is the weird bit: If i deploy the spring mvc web on my tomcat instance inside eclipse (run on server...) 
I get the following error : org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: org/test/MyClass at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:580) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:442) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:458) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:339) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:306) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:127) at javax.servlet.GenericServlet.init(GenericServlet.java:160) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1133) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1087) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:996) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4834) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5155) at org.apache.catalina.core.StandardContext$3.call(StandardContext.java:5150) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.NoClassDefFoundError: org/test/MyClass at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2427) at java.lang.Class.getDeclaredMethods(Class.java:1791) at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:446) at org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping.determineUrlsForHandlerMethods(DefaultAnnotationHandlerMapping.java:172) at org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping.determineUrlsForHandler(DefaultAnnotationHandlerMapping.java:118) at org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.detectHandlers(AbstractDetectingUrlHandlerMapping.java:79) at 
org.springframework.web.servlet.handler.AbstractDetectingUrlHandlerMapping.initApplicationContext(AbstractDetectingUrlHandlerMapping.java:58) at org.springframework.context.support.ApplicationObjectSupport.initApplicationContext(ApplicationObjectSupport.java:119) at org.springframework.web.context.support.WebApplicationObjectSupport.initApplicationContext(WebApplicationObjectSupport.java:72) at org.springframework.context.support.ApplicationObjectSupport.setApplicationContext(ApplicationObjectSupport.java:73) at org.springframework.context.support.ApplicationContextAwareProcessor.invokeAwareInterfaces(ApplicationContextAwareProcessor.java:106) at org.springframework.context.support.ApplicationContextAwareProcessor.postProcessBeforeInitialization(ApplicationContextAwareProcessor.java:85) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:394) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1413) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) ... 25 more 20-Feb-2011 10:54:53 org.apache.catalina.core.ApplicationContext log SEVERE: StandardWrapper.Throwable org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': Initialization of bean failed; nested exception is java.lang.NoClassDefFoundError: org/test/MyClass If i build the war file (using maven "install" goal), and then deploy that war file in the webapps directory of a standalone tomcat server (7.0.8 as well) it WORKS :-( What am i missing ? Thanks for the help.

    Read the article

  • ruby on rails gitorious setup on ubuntu

    - by dogmatic69
    I've been trying to install Gitorious for a while now, which required Ruby, Rails, etc. I've finally got Rails pages serving, but I can't finish the Gitorious installation because the gem version is too new. The error logs say to run 'rake ultrasphinx:configure', and that gives:

        rake ultrasphinx:configure
        (in /var/www/apps/gitorious)
        rake aborted!
        uninitialized constant ActiveSupport::Dependencies::Mutex
        /var/www/apps/gitorious/Rakefile:10:in `require'
        (See full trace by running task with --trace)

    From searching Google, this is because of the wrong gem version, and I can't find a way to downgrade it. Apparently sudo gem update --system 1.4.2 should do the trick, but Ubuntu 10.10 does not like this:

        ERROR: While executing gem ... (RuntimeError)
        gem update --system is disabled on Debian, because it will overwrite the content of the rubygems Debian package, and might break your Debian system in subtle ways. The Debian-supported way to update rubygems is through apt-get, using Debian official repositories. If you really know what you are doing, you can still update rubygems by setting the REALLY_GEM_UPDATE_SYSTEM environment variable, but please remember that this is completely unsupported by Debian.

    So I added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc, and still the same. I've tried various forms of setting this environment variable with no luck. I've also been told on the #gitorious IRC channel to add the file config/initializers/rubygems.rb with the line require "thread" in it. This has done nothing.

    EDIT: Just found another way, which was rvm install rubygems 1.4.2, and it gave:

        Removing old Rubygems files...
        Installing rubygems dedicated to ruby-1.8.7-p334...
        Retrieving rubygems-1.4.2
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  288k  100  288k    0     0   282k      0  0:00:01  0:00:01 --:--:--  414k
        Extracting rubygems-1.4.2 ...
        Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby
        ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log
        WARN: Installation of rubygems did not complete successfully.

    TL;DR: please tell me how to downgrade rubygems on Ubuntu 10.10, or how to upgrade Gitorious to work with 1.6.2 gems.

    Read the article

  • Best RAID setup for multimedia fileserver?

    - by Mr. Schwabe
    I'm building a fileserver for my small office. We do film and multimedia design, with only 3 clients connected. The server is primarily for local access to graphic assets and video files. I'm looking for advice on the hardware and software required, particularly for the RAID. I have the following objectives:
    A) Merged capacity. I'd like all other systems to access the data as a single mapped network drive with an initial capacity of 10 TB, so perhaps 5x 2 TB drives (plus mirror drives for redundancy).
    B) Easy way to increase capacity. Thinking long term, I'd like to 'easily' add more drives to the array for a potential two- or three-fold increase in capacity. So theoretically it could grow to a 30 TB array consisting of maybe 15x 2 TB drives of capacity (plus mirror drives for redundancy).
    C) Maximum fault tolerance. I want at least 1 mirror drive per capacity drive (in layman's terms). So if I start with 10 TB / 5x 2 TB of capacity, I suppose I would need another 5x 2 TB drives to be mirrors, so 10 drives total. But I'd also like the potential for even more redundancy, with up to 2 additional mirrors per 'capacity drive' (and the ability to add them to the array at any time with ease).
    D) Easy way to monitor drive health. I'd like an intuitive interface for managing the RAID and monitoring drive health.
    The other systems accessing this network drive will be running Windows, but also the odd Ubuntu and MacOS system as well. Are these objectives attainable? What type of RAID setup do you recommend? What hardware will be required? Also, what OS do you think this server should be running, and does it really matter? I'm no network admin, just a long-time Windows user without much Linux experience. That said, I'm not opposed to a Linux solution if it's easy enough and more practical than a Windows OS for this server, or maybe something such as Openfiler. The budget should hit the sweet spot for value and performance (hence my preference for 2 TB drives). The biggest focus is storage; aside from that, the system just needs to keep the drives running optimally with perhaps 2 or 3 clients accessing / writing files at any given time. The hardware quote would start with something like 10x 2 TB WD Caviar Blacks, about $1900 for the storage + $x for the remaining parts. http://ncix.com/products/index.php?sku=42775&vpn=WD2001FASS&manufacture=Western%20Digital%20WD Your advice is appreciated, thanks!

    Read the article

  • how can I make pip/setuptools understand that my package is in ./src?

    - by Giacomo Lacava
    I have a library in GitHub with a layout like this:

        README
        setup.py
        src/
            somelibrary.py

    Note: I cannot change the layout, but I can change setup.py. I want to be able to reference this library from requirements.txt, so that people can run pip install -r requirements.txt and have it installed automagically. So I add a line like this to requirements.txt:

        -e git+http://blablabla/blabla#egg=somelibrary

    This will clone the repository under ./src/somelibrary and then run setup.py develop on it, which just adds a link to ./src/somelibrary under site-packages. Unfortunately, because the module is actually under ./src/somelibrary/src, it seems Python can't see the library correctly. What am I missing? I guess it must be a setup.py option I'm not using correctly.
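
    For readers hitting the same symptom: the usual way to tell setuptools that the importable code lives under src/ is the package_dir mapping in setup.py. Below is a minimal sketch, assuming the repository really does contain a single module file at src/somelibrary.py; the name, version, and comments are illustrative placeholders, not details from the original project.

        # setup.py -- sketch for a repo whose importable module lives in src/
        # Only package_dir and py_modules matter here; the metadata is a placeholder.
        from setuptools import setup

        setup(
            name="somelibrary",          # placeholder project name
            version="0.1",               # placeholder version
            py_modules=["somelibrary"],  # single-module project: src/somelibrary.py
            package_dir={"": "src"},     # tell setuptools the code root is ./src
        )

    With package_dir={"": "src"}, both a regular install and setup.py develop (which is what the editable -e requirements line runs) should look for somelibrary.py inside src/, so the module becomes importable as import somelibrary after pip install -r requirements.txt.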

    Read the article
