Search Results

Search found 45925 results on 1837 pages for 'process start'.


  • Daemonize() issues on Debian

    - by djTeller
    Hi, I'm currently writing a multi-process client and a multi-threaded server for a project I have. The server is a daemon. In order to accomplish that, I'm using the following daemonize() code:

        static void daemonize(void)
        {
            pid_t pid, sid;

            /* already a daemon */
            if (getppid() == 1)
                return;

            /* Fork off the parent process */
            pid = fork();
            if (pid < 0) {
                exit(EXIT_FAILURE);
            }

            /* If we got a good PID, then we can exit the parent process. */
            if (pid > 0) {
                exit(EXIT_SUCCESS);
            }

            /* At this point we are executing as the child process */

            /* Change the file mode mask */
            umask(0);

            /* Create a new SID for the child process */
            sid = setsid();
            if (sid < 0) {
                exit(EXIT_FAILURE);
            }

            /* Change the current working directory. This prevents the current
               directory from being locked, and hence not being removable. */
            if ((chdir("/")) < 0) {
                exit(EXIT_FAILURE);
            }

            /* Redirect standard files to /dev/null */
            freopen("/dev/null", "r", stdin);
            freopen("/dev/null", "w", stdout);
            freopen("/dev/null", "w", stderr);
        }

        int main(int argc, char *argv[])
        {
            daemonize();

            /* Now we are a daemon -- do the work for which we were paid */
            return 0;
        }

    I get a strange side effect when testing the server on Debian (Ubuntu): the accept() function always fails to accept connections, and the value returned is -1. I have no idea what is causing this, since on RedHat and CentOS it works well. When I remove the call to daemonize(), everything works well on Debian; when I add it back, the same accept() error reproduces. I've been monitoring /proc/<pid>/fd, and everything looks good. Something about daemonize() on this Debian release just doesn't seem to work. (Debian GNU/Linux 5.0, Linux 2.6.26-2-286 #1 SMP) Any idea what is causing this? Thank you

  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a CSV file. This calls a service that reads the information from the CSV, puts it into DynamicEntity objects and calls the CRM service to create/update entities in CRM. When this service creates or updates an entity, it kicks off other plugins that apply certain business rules; these rules can also create or update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting garbage collection code in the business rules, and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the web service that calls the CRM service? I hope that makes sense. I'm not overly familiar with memory management issues, so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil

    EDIT: Okay, well, the handle count goes up when I call the Service.Create(DynamicEntity) method, so I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call, the handle count stays as it is. What I need to know is whether this is something I should be managing, or something CRM takes care of (or doesn't take care of, but that I can't do anything about).

    Another edit: Right, this is how it works.
    1) We have CRM and its related services.
    2) We have another service, independent of CRM, that uses the CRM services (number 1 above) to create entities based on CSV info passed into it.
    3) We have a website that allows a user to upload a CSV, and calls service 2 above to create/update entities in CRM.
    4) We have plugins fired by CRM which use service 1 above to create/update entities.
    So the user uploads a CSV to the website (3), and this fires a service (2). When service 2 creates an entity using service 1, the plugins (4) fire. The plugins also use service 1 to create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the method/class/service finishes, the handle count remains the same, so when the whole process occurs again the handle count increases again.

  • MySQL Hibernate sort on 2 columns

    - by sammichy
    I have a table as follows:

        Table item {
            ID             - primary key
            content        - string
            published_date - when the content was published
            create_date    - when this database entry was created
        }

    Every hour (or at a specified interval) I run a process to update this table with data from different sources (websites). I want to display the results according to the following rules:
    1. The entries created each time the process runs should be grouped together. So the entries from the 2nd process run will always come after the entries from the 1st run, even if the published_date of an entry from the 1st run is after the published_date of an entry from the 2nd run.
    2. Within the grouping by run, the entries are sorted by published_date.
    3. Another restriction is that I prefer that data from the same source not be grouped together. If I sort by create_date, published_date, I will end up with data from source a, then data from source b, etc. I prefer that the data within each hour be mixed up for better presentation.
    If I add a column to this table and store a counter which increments each time the process runs, it is possible to create a query that sorts first by counter and then by published_date. Is there a way to do it without adding a field? I'm using Hibernate over MySQL. For example:

        Hour 1 (run 1)
            4 rows collected from site a (rows 1-4)
            3 rows collected from site b (rows 5-7)
        Hour 2 (run 2)
            2 rows collected from site a (rows 8-9)
            3 rows collected from site b (rows 10-12)
        ...

    After each run, new records are added to the database from each website. The create date is the time when the record was created in the database; the published date is part of the content and is read in from the external source. When the results are displayed, I would like rows to be grouped together based on the hour they were created in, so rows 1-7 would be displayed before rows 8-12. Within each hourly grouping, I would like to sort the results by published date (timestamp). This is necessary so that the posts from all the sites collected in that hour are not grouped together by site, but rather mixed in with each other.
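    For what it's worth, if each run's inserts land within a single clock hour (as in the example above), one way to get this ordering without adding a column is to sort by create_date truncated to the hour, then by published_date. The sketch below is a minimal, hypothetical Hibernate-over-MySQL version of that idea; the Item entity class, its mapping, and the method names are assumptions, not a confirmed solution:

        import java.util.List;

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        public class ItemQueries {

            /* Orders rows by the hour of create_date first (one bucket per
               hourly run), then by published_date inside each bucket, so the
               sources are interleaved within a run. "Item" is the mapped
               entity for the item table (assumed name). */
            @SuppressWarnings("unchecked")
            public static List<Item> findForDisplay(SessionFactory sessionFactory) {
                Session session = sessionFactory.openSession();
                try {
                    return session.createSQLQuery(
                            "SELECT * FROM item"
                            + " ORDER BY DATE_FORMAT(create_date, '%Y-%m-%d %H') ASC,"
                            + " published_date ASC")
                            .addEntity(Item.class)
                            .list();
                } finally {
                    session.close();
                }
            }
        }

    If a run can straddle an hour boundary, the counter column mentioned in the question (or a run timestamp shared by all rows of one run) remains the more robust option.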

  • asp.net mvc 2 multithread

    - by Chris Cap
    I'm getting some weird behavior in ASP.NET MVC 2 (at least it's not what I'm expecting). I have a controller HttpPost action that processes credit cards. I'm setting a session variable on the server to let me know if a user has attempted to process a card, and I clear that session variable when I'm done. I am doing some very quick posts to test the process, and the session variable is coming back null every time because I clear it at the end of the process. I have some debug prints that show me that all requests are processed synchronously. That is, the first attempt occurs, it fails, the session variable is cleared; then the second attempt runs, fails, and the variable is cleared; and so on. My first thought was that this was a side effect of debugging, and that the local webserver just lined requests up for debugging purposes. So I put it on a development server and the same thing occurs: multiple submits/posts are processed synchronously. I even put a System.Threading.Thread.Sleep in there to make sure it would stop right after I set the session variable, and it still won't even START the second request until the first is done. Has anyone else seen this behavior? My understanding was that a worker process was spawned for each request and that these actions could happen asynchronously. Here's some pseudocode:

        if (Session["CardCharged"] != null)
            return RedirectToAction("Index", "Problem");

        Session["CardCharged"] = false; // starting the process

        System.Threading.Thread.Sleep(10000);
        // charge card here

        if (!providerResponse.IsApproved)
        {
            System.Diagnostics.Debug.WriteLine("failed");
            // have to allow them to charge again since this failed
            Session["CardCharged"] = null;
            return View(myModel);
        }

        Session["CardCharged"] = true;
        return RedirectToAction("Index", "OrderComplete");

    I should mention that my code ALWAYS fails the credit card check, for testing purposes: I'm trying to test the situation where processing is STILL occurring and redirect the user elsewhere. I have set a dummy session variable elsewhere to assure that the session id "sticks", so I know the session id is the same for each request. Thanks

  • VB.Net 2008 IDE hanging - MSVB7.dll eating 100% CPU when editing code

    - by Andrew Backer
    I am having a problem with msvb7.dll eating 50%+ CPU on my dual-core system. This usually lasts 10-30 seconds or so, during which time the IDE is non-responsive. This occurs when I do pretty much anything in the text editor, and can be replicated by simply adding blank lines to a function and then deleting them. Or pasting some code. Or... lotsa stuff.
    Things I have tried:
    - SP1 is installed.
    - I had DevExpress' Refactor/CodeRush, components, and CodeIt.Right installed, but have removed all 3 of them. (I had installed the latest version of Refactor! Pro (9.3.4), perhaps the day before.)
    - I have tried a VS.NET repair. There is a KB that referenced some CPU-destroying issue with VB, but it was included in SP1.
    Also:
    - The solution consists of ~30 VB projects and 2 C# projects.
    - 8 other developers aren't having any issues with this (or at least not the SAME issues; we all have 'em).
    - A clean get from TFS was done.
    - The project builds properly, and I can even debug.
    This doesn't seem to happen on really small solutions, but perhaps it does and it just goes away super quick. Any clues at all as to what might be causing this, or how to fix it? I REALLY don't want to lose another day uninstalling and reinstalling and patching and so on =) If that even fixes it. Here is the stack trace (Process Explorer) that I get from the threads window when msvb7.dll is churning.

        --- title in Process Explorer [threads] tab for process --------
        cpu: 49.28%   cswitch delta: 300 to 3500
        startaddress: [msvb7.dll+0x4218c]
        msvb7.dll version: 9.0.30729.1

        --- actual stack trace -------
        ntkrnlpa.exe!KiUnexpectedInterrupt+0x121
        ntkrnlpa.exe!ZwYieldExecution+0x1c56
        ntkrnlpa.exe!KiDispatchInterrupt+0x72e
        NDIS.sys!NdisFreeToBlockPool+0x15e1
        // shortened stack trace; all of these are from msvb7.dll
        msvb7.dll+0x46ce7 <- 0x2676a <- 0x2698e <- 0x38031 <- 0x2659f <- 0x26644
        msvb7.dll+0x25f29 <- 0x2ac7a <- 0x27522 <- 0x274a0 <- 0x2b5ce <- 0x2b6e4
        msvb7.dll+0x67d0a <- 0x68551 <- 0x6817b <- 0x681f0 <- 0x67c38 <- 0x65fa8
        msvb7.dll+0x666c6 <- 0x6672c <- 0x6673d <- 0x6677c <- 0x667b4 <- 0x63c77
        msvb7.dll+0x63e97 <- 0x42c3a <- 0x42bc1 <- 0x41bd7
        kernel32.dll!GetModuleFileNameA+0x1b4

    This is the list of stuff from "copy info" in Help-About, shortened to a reasonable length:

        Microsoft Visual Studio 2008 | Version 9.0.30729.1 SP
        Microsoft Visual Studio 2008 Professional Edition - ENU Service Pack 1 (KB945140)
        Microsoft .NET Framework | Version 3.5 SP1
        Microsoft Visual Basic 2008
        Microsoft Visual C# 2008
        Microsoft Visual F# for Visual Studio 2008
        Microsoft Visual Studio 2008 Team Explorer | Version 9.0.30729.1
        Microsoft Visual Studio 2008 Tools for Office
        Microsoft Visual Web Developer 2008
        Hotfix for Microsoft Visual Studio 2008 Professional Edition - ENU: KB944899, KB945282, KB946040, KB946308, KB946344, KB946581, KB947171, KB947173, KB947180, KB947540, KB947789, KB948127, KB946260, KB946458, KB948816
        Microsoft Recipe Framework Package 8.0
        Process Editor WIT Designer 1.4.0.0
        Process Editor for Microsoft Visual Studio Team Foundation Server, Version 1.4.0.0
        tangible T4 Editor 9.0
        tangible T4 Text Template Editor - T4 Editor
        tangibleprojectsystem 1.0
        Team Foundation Server Power Tools October 2008
        SQL Prompt 4.0 (disabled)

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET. This is a major refresh of the Windows Azure SDK, and it includes some great new features and enhancements:

    - Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview
    - Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
    - Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
    - Virtual Machines: Start and stop VMs with suspended billing directly from within Visual Studio
    - Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support
    - Service Bus: New high availability options, Notification Hub support, improved VS tooling
    - PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more

    All of these SDK enhancements are available to start using immediately, and you can download the SDK from the Windows Azure .NET Developer Center. Visual Studio's Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today's SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today.

    Visual Studio 2013 Preview Support

    Today's Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the "install the SDK" link from the Windows Azure .NET Developer Center, then choose which version of Visual Studio you want to use it with; clicking the third link will install the SDK with the latest VS 2013 Preview. If you don't already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web.

    Visual Studio 2013 VM Image Hosted in the Cloud

    One of the requests we've heard from several customers has been the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today's SDK update we've added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it. This provides a really easy way to create a development environment in the cloud with the latest tools. With the shutdown-and-suspend-billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, then shut down the virtual machine and not have to worry about usage charges while it is not in use. You can create your own VS image in the cloud by using the New->Compute->Virtual Machine->From Gallery menu within the Windows Azure Management Portal, and then selecting the "Visual Studio Ultimate 2013 Preview" template.

    Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources

    With the Windows Azure SDK 2.1 release you'll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node.
    From the top-level node you can now manage your Windows Azure credentials, import a subscription file, or filter Server Explorer to only show services from particular subscriptions or regions. Note: the Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview. Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer; you no longer need to add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location.

    Subscription and Region Filtering Support

    Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services by the subscription or region they are in. If you have multiple subscriptions but need to focus your attention on just a few of them for some period of time, this is a handy way to hide the services from the other subscriptions until they become relevant. You can do the same sort of filtering by region. To enable this, just select "Filter Services" from the context menu on the Windows Azure node, then choose the subscriptions and/or regions you want to filter by. For example, I can decide to show only services from my pay-as-you-go subscription within the East US region, and Visual Studio will automatically filter the items that show up in the Server Explorer accordingly. With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You'll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription.

    Other Improvements

    We've also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance, as well as the properties on storage accounts, virtual machines and web sites; just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/updating storage tables, blobs and queues directly within Server Explorer: simply right-click on the appropriate storage account node and you can create them directly within Visual Studio.

    Virtual Machines: Start/Stop within Visual Studio

    Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it. This is especially useful in dev/test scenarios where you can start a VM – such as a SQL Server – during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now remote desktop directly into VMs using the "Connect using Remote Desktop" context menu command in VS Server Explorer.
    Cloud Services: Emulator Express with Run as Normal User Support

    You can now launch Visual Studio and run your cloud services locally as a normal user (without having to elevate to an administrator account) using a new Emulator Express option, included as a preview feature with this SDK release. Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode – one instance per role – that doesn't require administrative permissions and uses 40% fewer resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles. To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project:

    1. On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab.
    2. Check the setting for IIS (Internet Information Services). Make sure that the option is set to IIS Express, not the full version of IIS; Emulator Express is not compatible with full IIS.
    3. On the Web tab, choose the option for Emulator Express.

    Service Bus: Notification Hubs

    With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN, or watch the Notification Hubs //BUILD/ presentation.

    Service Bus: Paired Namespaces

    One of the new features included with today's Windows Azure SDK 2.1 release is support for Service Bus "Paired Namespaces". Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example, due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic, which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which let you connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic "Asynchronous Messaging Patterns and High Availability".

    Service Bus: Tooling Improvements

    In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio's Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template: this code now uses an event-driven "message pump" programming model based on the QueueClient.OnMessage method.
    PowerShell: Tons of New Automation Commands

    Since my last blog post on the previous Windows Azure SDK 2.0 release, we've updated Windows Azure PowerShell (which is a separate download) five times; the full change log is available online. We've added new cmdlets in the following areas:

    - China instance and Windows Azure Pack support
    - Environment configuration
    - VMs
    - Cloud Services
    - Web Sites
    - Storage
    - SQL Azure
    - Service Bus

    China Instance and Windows Azure Pack

    We now support the following cmdlets for the China instance and Windows Azure Pack, respectively:

    - China instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network
    - Windows Azure Pack: Web Sites, Service Bus

    We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future.

    Virtual Machines: Stop/Start Virtual Machines

    Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing. If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter.

    Virtual Machines: VM Endpoint ACLs

    We've added and updated a bunch of cmdlets for configuring fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint, for example to add an ACL rule to an existing endpoint of a VM:

    - New-AzureAclConfig
    - Get-AzureAclConfig
    - Set-AzureAclConfig
    - Remove-AzureAclConfig
    - Add-AzureEndpoint -ACL
    - Set-AzureEndpoint -ACL

    Other improvements for Virtual Machine management include:

    - A -NoWinRMEndpoint parameter on New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management
    - A -DirectServerReturn parameter on Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return
    - A Set-AzureLoadBalancedEndpoint cmdlet to modify load-balanced endpoints

    Cloud Services: Remote Desktop and Diagnostics

    Remote Desktop and Diagnostics are popular debugging options for Cloud Services, and we've introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell. For the Windows Azure Cloud Services Remote Desktop extension (used, for example, to enable Remote Desktop for a Cloud Service):

    - New-AzureServiceRemoteDesktopExtensionConfig
    - Get-AzureServiceRemoteDesktopExtension
    - Set-AzureServiceRemoteDesktopExtension
    - Remove-AzureServiceRemoteDesktopExtension

    For the Windows Azure Cloud Services Diagnostics extension:

    - New-AzureServiceDiagnosticsExtensionConfig
    - Get-AzureServiceDiagnosticsExtension
    - Set-AzureServiceDiagnosticsExtension
    - Remove-AzureServiceDiagnosticsExtension

    Web Sites: Diagnostics

    With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet to stream the logs of your Web Sites. Recently, we've also added cmdlets to configure Web Site application diagnostics, writing either to the file system or to a Windows Azure Storage table:

    - Enable-AzureWebsiteApplicationDiagnostic
    - Disable-AzureWebsiteApplicationDiagnostic

    SQL Database

    Previously, you had to know the SQL Database server admin username and password if you wanted to manage the databases on that SQL Database server. Recently, we've made the experience much easier by not requiring the admin credential when the database server is in your subscription: you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server to use for the following cmdlets:

    - Get-AzureSqlDatabase
    - New-AzureSqlDatabase
    - Remove-AzureSqlDatabase
    - Set-AzureSqlDatabase

    We've also added an -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we've also added cmdlets to get the database server quota and set the database service objective. Check out the following cmdlets for details:

    - Get-AzureSqlDatabaseServerQuota
    - Get-AzureSqlDatabaseServiceObjective
    - Set-AzureSqlDatabase -ServiceObjective

    Storage and Service Bus

    Other new cmdlets include:

    - Storage: CRUD cmdlets for Azure tables and queues
    - Service Bus: cmdlets for managing authorization rules on your Service Bus namespace, queue, topic, relay and NotificationHub

    Summary

    Today's release includes a bunch of great features that enable you to build even better cloud solutions. All the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • Prepping a conference

    - by Laurent Bugnion
    I have had the chance to talk at many conferences these past few years, and came up with a way of preparing for them which works really well for me. Most importantly, it makes it quite easy to overcome an emergency (for example, if my laptop were to suddenly lose data). The whole code, as well as the slides and other documents, are in the cloud. I also use source control for my demos, so that I always have the latest and the greatest, but also a history of the changes I made to my demos. Finally, I have a system of code snippets which works great, and I have often had very positive remarks from the audience about it.

    Putting everything in the cloud

    The one thing I used to be the most scared of was a sudden crash of my laptop and being unable to restore in time for a conference. Most conferences ask speakers to send slides a few days (or weeks...) in advance, but let's face it, we all have last-minute changes to our talks, and I always arrive at the conference with updated slides that I pass to the management team. The answer to that dilemma used to be working off memory sticks, and that worked not bad. However, last year I started putting all the documents relating to a conference in a DropBox folder, and that works great too. Obviously DropBox works only if you have connectivity, so if I, for instance, update slides while on an international flight, I cannot save to the cloud. The obvious answer to that is to back everything up on a memory stick... but I have to admit, I have been trusting my luck, working off my laptop HD and then syncing everything to the cloud after landing. Of course, on some US national flights you get WiFi on board, so in that case it is even simpler :) Usually after the conference is done, I remove the files from DropBox and copy them to their "final destination". They are backed up from there to BackBlaze, the great online backup service I am using routinely (I currently have about 90GB of data in BackBlaze).

    Outlining the presentations

    I like to have a written outline of my presentations somewhere. I keep it simple: I just write the various sections of the presentation, with timing. I guess it is a remnant of the time when I was a private pilot, using checklists for flight preparation. For example:

        Demo about designability 15' (0:37)
            Switch to Blend
            Open MainPage.xaml
            Create a DataTemplate
            ...

    Here I can immediately see during the presentation if I am taking too much time for my demo (0:37 is where I need to be when I am done with this section of the presentation, and 15' is the time that this particular section takes). I keep these sections reasonable; I don't detail every step of the preparation. Typically I have one such section for every 10-15 minutes of my talks. Yes, I am timing my presentations. I keep adjusting these numbers when I rehearse, and this really helps me feel more confident during the presentations. This is especially important for presentations that are long, like my MIX11 demo, which clocked in at 57 minutes (I had a lot of stuff to show...). Such presentations are risky, because if anything goes wrong you will have to cut stuff, so the answer to that is: rehearse, rehearse, and when you're done rehearsing, rehearse a little more. I also have a "Preparation" section where I outline what I need to do before a presentation. For instance:

        Preparation
            Reboot in VHD
            Make sure MSN and Twitter are not running
            Open VS10 and load demo
            Open Blend and load demo
            Run the WP7 emulator
            ...
    I typically start preparing my laptop an hour before the talk, starting everything I need to start and then putting the laptop to sleep.

    Saving and printing the outline, timing

    Printing is a real problem, because it is really hard to find a printer at most conference venues, and also quite hard in hotels. To solve that, I simply write everything in OneNote (synced to the cloud; now you start to know what I like ;) and then I print it to a PDF (I use CutePDF Writer) that I save to my Kindle. During the presentation, I read the outline off the Kindle (I mostly just need a quick check to see how my timing is). For timing during the presentation, I use the free tool ChronoGPS on my Windows Phone 7, but of course any phone these days has a clock/chrono application. At some conferences they even have timers that the presenters can see, but they tend to count down and I prefer to count up... so I just use my own :)

    Source control for demos

    For demos, I create a separate folder and use Mercurial as source control. Mercurial has the huge advantage (over SVN or TFS) of working offline too, so I can commit while on a plane, and all the history is saved. Then, when I have connectivity, I push everything to the cloud (I am using the fantastic Trunksapp.com for my private repositories). Here too, the obvious downside is the risk of losing my last changes if my laptop crashes before I can push to the cloud, and here too the obvious answer would be to work from a memory stick... though I have to admit I didn't do that lately (except when I was writing Silverlight 4 Unleashed, where I was really paranoid...).

    And code snippets?

    I am one of those presenters who hates to type in front of an audience. I can type really fast (writing two books has this advantage: it really teaches you to touch type and be fast at it), but in the context of an audience, on a stage where it is often damn cold (an issue I had a lot at past conferences; air conditioning can freeze your fingers and make it really hard to type), it doesn't work as well. I don't know about you, but I really dislike seeing a presentation where the speaker uses the backspace key more often than the others ;) To solve that, I like to have my code ready in snippets and drag them to the screen. Then I can spend time explaining each code snippet while highlighting portions of the code (always highlight what you talk about; the audience often doesn't even see the cursor and doesn't know where you are on the screen!). Over the years I have used various solutions for code snippets, and now I have one which works really well... if you take a few precautions! I use the Visual Studio Toolbox.

    Preparing the code snippets

    You can store code snippets in the Toolbox for anything: XAML, C#, etc. I arrange the snippets in the order in which I need them, which is a great way to remember what comes next in the presentation. I also separate them by topic, to make them easier to find, for example when I switch to the slides and then back to the code. Remember that no matter how experienced you are, you will feel more nervous on stage than while you are preparing, so anything that makes it easier for you is going to benefit the audience. To store a code snippet, I do the following:

    1. Open the final demo that you want to show to the audience in Visual Studio.
    2. In your code, select a snippet of code that you want to explain in particular.
    3. Make sure that the Visual Studio Toolbox is open (menu View, Toolbox, or Ctrl-Alt-X).
    4. Drag the selected snippet from the code window to the Toolbox.
    5. (If needed) drag the snippet to the correct location (for example between two other code snippets, so that you can access it as you speak through the demo).
    6. Right-click on the snippet and select Rename Item from the context menu. Select a meaningful name.

    For the names, I use the following conventions:

    - If it is a method, I use the method's name.
    - If it is not a whole method, I use a descriptive name.
    - If it is the content of a method (i.e. the body only, without the method's signature), I use "-> MethodName". This reminds me during the presentation that this is only the body, and that I need to insert it into an existing signature. This is the case, for instance, when I use Visual Studio to automatically generate the members of an interface's implementation; then I only need to insert my snippet inside the generated method body.

    Saving the snippets

    This is the most important part!! It has happened to me a few times that VS10 lost its settings, and when that happens, the snippets are lost too! Yeah, that really sucks, especially (as happened once) when it occurs about an hour before a talk... Stress and sweat follow; not good conditions in which to start a talk in front of an audience, believe me. Thankfully, saving snippets is really easy with the following steps:

    1. Select the menu Tools, Import and Export Settings.
    2. Select Export selected environment settings and press Next.
    3. Uncheck All Settings, then expand General Settings and select Toolbox (only!). Press Next.
    4. Select your source control folder and save under a meaningful name (for instance Snippets.vssettings).
    5. Commit to source control and push to the cloud.

    By the way, this also has the advantage of applying source control to the snippets file (which is an XML file), so you get history for free on that file!

    Reimporting the snippets

    If VS loses its settings and you need to reimport the snippets, this can be done super easily and very fast. First make sure that the Toolbox is empty: when you import snippets, they are merged with existing ones, not replaced, so unless merging is really what you want, a clean Toolbox is really easier.

    1. Select the menu Tools, Import and Export Settings.
    2. Select Import selected environment settings and press Next.
    3. Select No, just import new settings and press Next.
    4. Press Browse and select the Snippets.vssettings file.
    5. Press Finish.

    Et voila, all your snippets appear again in the Toolbox. Whew, the worst was averted and you can start your demo without sweating! (I had to do that once literally 5 minutes before the start of a demo, while my laptop was already hooked to the projector, and it went just fine.)

    What about special tools?

    When using special tools (for example beta versions of tools you have early access to), or a special configuration of your laptop, things can get tricky, because you cannot really be sure that you will get a laptop with the same tools and the same configuration at the conference. To solve that, I take the following precautions:

    - I run my demos from a Virtual Hard Disk. The great John Papa made a very easy-to-follow web page where he explains how to create a VHD and install Win7 on it. This gives you the full power of your laptop (as fast as booting from the metal). I have a basic configuration saved on a USB hard drive (Win7 plus drivers, basic settings for desktop, folder options, taskbar etc.) with Visual Studio 2010 SP1 on it. When preparing, I start by copying this "basis VHD" to my laptop, install additional tools and configurations, and save the VHD back to the USB hard drive in a different folder. This allows me to reinstall my demo environment quite fast, for example in case of hard drive failure: replace the hard drive, copy the VHD to it, configure the BCD, and you can start.
    - Unfortunately this only works if the laptop itself still works. In the worst case of total failure, my safety net is to back up all the installers: the installers I use are synced on all my laptops and backed up to BackBlaze. If the worst happens and my laptop is absolutely broken, I can download the installers from BackBlaze and install on another laptop. This of course takes some time, and if that happens 5 minutes before a presentation, well... I don't have an answer to that, except of course crossing my fingers. Still, all of that gives me additional security.

    Conclusion

    Remember folks, talking to an audience, large or small, will make you nervous. Just ask Scott Hanselman :) The goal here is to create the best possible conditions for yourself, and to create an environment where everything is saved and easy to restore, where everything is well known and provides you with additional confidence. The cooler you feel before (and during ;) the presentation, the better it will be. Here too, the goal is to provide the best user experience you can, which in turn will make it more enjoyable for your audience! Happy presenting :)

    Laurent

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

  • Nashorn in the Twitterverse, Continued

    - by jlaskey
    After doing the Twitter example, it seemed reasonable to try graphing the result with JavaFX. At this time the Nashorn project doesn't have a JavaFX shell, so we have to go through some hoops to create a JavaFX application. I thought showing you some of those hoops might give you some idea of what you can do mixing Nashorn and Java (we'll add a JavaFX shell to the todo list.) First, let's look at the meat of the application. Here is the repackaged version of the original Twitter example:

        var twitter4j      = Packages.twitter4j;
        var TwitterFactory = twitter4j.TwitterFactory;
        var Query          = twitter4j.Query;

        function getTrendingData() {
            var twitter = new TwitterFactory().instance;
            var query   = new Query("nashorn OR nashornjs");
            query.since("2012-11-21");
            query.count = 100;
            var data = {};

            do {
                var result = twitter.search(query);
                var tweets = result.tweets;

                for each (tweet in tweets) {
                    var date = tweet.createdAt;
                    var key = (1900 + date.year) + "/" +
                              (1 + date.month) + "/" +
                              date.date;
                    data[key] = (data[key] || 0) + 1;
                }
            } while (query = result.nextQuery());

            return data;
        }

    Instead of just printing out tweets, getTrendingData tallies "tweets per date" during the sample period (since "2012-11-21", the date "New Project: Nashorn" was posted.) getTrendingData then returns the resulting tally object. Next, use JavaFX BarChart to display that data:

        var javafx         = Packages.javafx;
        var Stage          = javafx.stage.Stage;
        var Scene          = javafx.scene.Scene;
        var Group          = javafx.scene.Group;
        var Chart          = javafx.scene.chart.Chart;
        var FXCollections  = javafx.collections.FXCollections;
        var ObservableList = javafx.collections.ObservableList;
        var CategoryAxis   = javafx.scene.chart.CategoryAxis;
        var NumberAxis     = javafx.scene.chart.NumberAxis;
        var BarChart       = javafx.scene.chart.BarChart;
        var XYChart        = javafx.scene.chart.XYChart;
        var Series         = XYChart.Series;
        var Data           = XYChart.Data;

        function graph(stage, data) {
            var root = new Group();
            stage.scene = new Scene(root);

            var dates = Object.keys(data);
            var xAxis = new CategoryAxis();
            xAxis.categories = FXCollections.observableArrayList(dates);
            var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0);

            var series = FXCollections.observableArrayList();
            for (var date in data) {
                series.add(new Data(date, data[date]));
            }
            var tweets = new Series("Tweets", series);
            var barChartData = FXCollections.observableArrayList(tweets);

            var chart = new BarChart(xAxis, yAxis, barChartData, 25.0);
            root.children.add(chart);
        }

    I should point out that there is a lot of subtlety going on in the background. For example, stage.scene = new Scene(root) is equivalent to stage.setScene(new Scene(root)): if Nashorn can't find a property (scene), it searches (via Dynalink) for the JavaBeans equivalent (setScene.) Also note that Nashorn is magically handling the generic class FXCollections. Finally, with the call to observableArrayList(dates), Nashorn is automatically converting the JavaScript array dates to a Java collection. It really is hard to identify which objects are JavaScript and which are Java. Does it really matter? Okay, with the meat out of the way, let's talk about the hoops. When working with JavaFX, you start with a main subclass of javafx.application.Application.
    This class handles the initialization of the JavaFX libraries and the event processing. This is what I used for this example:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import javafx.application.Application;
        import javafx.stage.Stage;
        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class TrendingMain extends Application {
            private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
            private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");

            private Trending trending;

            public static void main(String[] args) {
                launch(args);
            }

            @Override
            public void start(Stage stage) throws Exception {
                trending = (Trending) load("Trending.js");
                trending.start(stage);
            }

            @Override
            public void stop() throws Exception {
                trending.stop();
            }

            private Object load(String script) throws IOException, ScriptException {
                try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) {
                    return engine.eval(new InputStreamReader(is, "utf-8"));
                }
            }
        }

    To initialize Nashorn, we use JSR-223's javax.script:

        private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
        private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");

    This code sets up an instance of the Nashorn engine for evaluating scripts. The load method reads a script into memory and then gets engine to eval that script. Note that load also returns the result of the eval. Now for the fun part. There are several different approaches we could use to communicate between the Java main and the script. In this example we'll use a Java interface. The JavaFX main needs to do at least start and stop, so the following will suffice as an interface:

        public interface Trending {
            public void start(Stage stage) throws Exception;
            public void stop() throws Exception;
        }

    At the end of the example's script we add:

        (function newTrending() {
            return new Packages.Trending() {
                start: function(stage) {
                    var data = getTrendingData();
                    graph(stage, data);
                    stage.show();
                },

                stop: function() {
                }
            }
        })();

    which instantiates a new subclass instance of Trending and overrides the start and stop methods. The result of this function call is what is returned to main via the eval:

        trending = (Trending) load("Trending.js");

    To recap, the script Trending.js contains the functions getTrendingData, graph and newTrending, plus the call at the end to newTrending. Back in the Java code, we cast the result of the eval (the call to newTrending) to Trending; thus, we end up with an object that we can then use to call back into the script:

        trending.start(stage);

    Voila!

  • WebDav issue with Mac OS X 10.5.3 onwards

    - by svnr
    Hi, we upgraded to Mac OS X 10.5.3 and are having a problem when uploading files (PUT) to a WebDAV server (the server is Apache running in a Windows environment). When we drag and drop onto a WebDAV folder using Finder, we get a -36 error. Looking at the stack trace of the web server, the problem is due to an INVALID CRLF, or sometimes we get the following error; both stack traces point to an error when copying the stream. Googling suggests that it is because the Mac changed the Transfer-Encoding to 'chunked':

        ClientAbortException: java.net.SocketException: Software caused connection abort: socket write error
            at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366)
            at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433)
            at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348)
            at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:392)
            at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:381)
            at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:88)
            at org.apache.commons.io.CopyUtils.copy(CopyUtils.java:200)
            at com.artesia.webdav.action.helper.ResponseWriterHelper.writeFileContentResponse(ResponseWriterHelper.java:206)
            at com.artesia.webdav.action.GetMethodAction.executeWebDavMethod(GetMethodAction.java:147)
            at com.artesia.webdav.action.BaseWebDavMethodAction.execute(BaseWebDavMethodAction.java:257)
            at com.artesia.webdav.action.BaseWebDavAction.execute(BaseWebDavAction.java:92)
            at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
            at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
            at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:697)
            at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672)
            at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
            at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
            at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
            at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1069)
            at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455)
            at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
            at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:697)
            at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672)
            at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
            at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
            at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
            at com.artesia.webdav.web.BaseWebDavServlet.forward(BaseWebDavServlet.java:91)
            at com.artesia.webdav.web.BaseWebDavServlet.service(BaseWebDavServlet.java:83)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at com.artesia.webdav.action.RequestFilter.doFilter(RequestFilter.java:46)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at com.artesia.webdav.web.WebDavAuthenticationFilter.doFilter(WebDavAuthenticationFilter.java:463)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at com.artesia.webdav.web.MacSessionHackFilter.doFilter(MacSessionHackFilter.java:111)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
            at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:175)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:74)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
            at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664)
            at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
            at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: java.net.SocketException: Software caused connection abort: socket write error
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
            at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
            at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:746)
            at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433)
            at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348)
            at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:769)
            at org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:117)
            at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:579)
            at org.apache.coyote.Response.doWrite(Response.java:559)
            at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:361)
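    One way to verify the chunked-encoding theory on the server side is to log the Transfer-Encoding header of incoming requests before they reach the WebDAV servlet. Below is a minimal, hypothetical diagnostic sketch (the class name and web.xml registration are assumptions); it uses only the standard javax.servlet API already visible in the stack trace:

        import java.io.IOException;

        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;

        /* Hypothetical diagnostic filter: logs the method, URI and
           Transfer-Encoding header of every request, so chunked PUTs coming
           from Finder can be spotted in the server log. Map it in web.xml
           ahead of the WebDAV servlet. */
        public class TransferEncodingLogFilter implements Filter {

            public void init(FilterConfig config) throws ServletException {
            }

            public void doFilter(ServletRequest request, ServletResponse response,
                                 FilterChain chain) throws IOException, ServletException {
                if (request instanceof HttpServletRequest) {
                    HttpServletRequest http = (HttpServletRequest) request;
                    // Chunked requests carry "Transfer-Encoding: chunked"
                    // and usually no Content-Length header.
                    System.out.println(http.getMethod() + " " + http.getRequestURI()
                            + " Transfer-Encoding=" + http.getHeader("Transfer-Encoding")
                            + " Content-Length=" + http.getHeader("Content-Length"));
                }
                chain.doFilter(request, response);
            }

            public void destroy() {
            }
        }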

  • WebDav issue with Mac OS X 10.5.3 onwards

    - by svnr
    We upgraded to Mac OS X 10.5.3 and getting problem when uploading files (PUT) to a webdav server (the server is Apache running on a Windows environment). When we drag and drop on to a webdav folder using Finder we get a -36 error. When looking at the stack trace of the web server the problem is due to INVALID CRLF or some times getting the following error. Both the stack point to error when copying the stream. When googled found that it is because the Mac changed to Transfer-Encoding to 'Chunked' ClientAbortException: java.net.SocketException: Software caused connection abort: socket write error at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366) at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433) at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348) at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:392) at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:381) at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:88) at org.apache.commons.io.CopyUtils.copy(CopyUtils.java:200) at com.artesia.webdav.action.helper.ResponseWriterHelper.writeFileContentResponse(ResponseWriterHelper.java:206) at com.artesia.webdav.action.GetMethodAction.executeWebDavMethod(GetMethodAction.java:147) at com.artesia.webdav.action.BaseWebDavMethodAction.execute(BaseWebDavMethodAction.java:257) at com.artesia.webdav.action.BaseWebDavAction.execute(BaseWebDavAction.java:92) at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:697) at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301) at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1069) at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:455) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:279) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:507) at javax.servlet.http.HttpServlet.service(HttpServlet.java:697) at com.artesia.webdav.web.WebDavActionServlet.service(WebDavActionServlet.java:93) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672) at 
org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301) at com.artesia.webdav.web.BaseWebDavServlet.forward(BaseWebDavServlet.java:91) at com.artesia.webdav.web.BaseWebDavServlet.service(BaseWebDavServlet.java:83) at javax.servlet.http.HttpServlet.service(HttpServlet.java:810) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.action.RequestFilter.doFilter(RequestFilter.java:46) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.web.WebDavAuthenticationFilter.doFilter(WebDavAuthenticationFilter.java:463) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at com.artesia.webdav.web.MacSessionHackFilter.doFilter(MacSessionHackFilter.java:111) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:175) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:74) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527) at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112) at java.lang.Thread.run(Thread.java:595) Caused by: java.net.SocketException: Software caused connection abort: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92) at java.net.SocketOutputStream.write(SocketOutputStream.java:136) at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:746) at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:433) at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:348) at org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:769) at 
org.apache.coyote.http11.filters.IdentityOutputFilter.doWrite(IdentityOutputFilter.java:117) at org.apache.coyote.http11.InternalOutputBuffer.doWrite(InternalOutputBuffer.java:579) at org.apache.coyote.Response.doWrite(Response.java:559) at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:361)
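
    If the root cause really is Finder's switch to Transfer-Encoding: chunked, one server-side workaround is to buffer the PUT body before the WebDAV code reads it, so downstream code sees a fully-read stream with a known length. Below is a minimal sketch of such a servlet filter for the Servlet 2.x API shown in the stack trace; the class name is hypothetical, it buffers in memory (only viable for modest uploads), and it only helps if Tomcat's connector delivers the chunked body intact in the first place.

      import java.io.*;
      import javax.servlet.*;
      import javax.servlet.http.*;

      // Hypothetical filter: drain a (possibly chunked) PUT body fully, then hand
      // downstream code a replayable stream that reports a concrete length.
      public class BufferChunkedPutFilter implements Filter {
          public void init(FilterConfig config) {}
          public void destroy() {}

          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              HttpServletRequest httpReq = (HttpServletRequest) req;
              if ("PUT".equals(httpReq.getMethod())
                      && "chunked".equalsIgnoreCase(httpReq.getHeader("Transfer-Encoding"))) {
                  final byte[] body = readFully(httpReq.getInputStream());
                  chain.doFilter(new HttpServletRequestWrapper(httpReq) {
                      public int getContentLength() { return body.length; }
                      public ServletInputStream getInputStream() {
                          final ByteArrayInputStream in = new ByteArrayInputStream(body);
                          return new ServletInputStream() {
                              public int read() throws IOException { return in.read(); }
                          };
                      }
                  }, res);
              } else {
                  chain.doFilter(req, res);
              }
          }

          private static byte[] readFully(InputStream in) throws IOException {
              ByteArrayOutputStream out = new ByteArrayOutputStream();
              byte[] buf = new byte[8192];
              for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
              return out.toByteArray();
          }
      }

    Mapped in web.xml ahead of the WebDAV servlet, this makes the request look like a plain Content-Length PUT to the downstream code; whether that is enough depends on where exactly the CRLF parsing fails.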

    Read the article

  • Using Upstart to manage Unicorn w/ rbenv + bundler binstubs w/ ruby-local-exec shebang

    - by codykrieger
    Alright, this is melting my brain. It might have something to do with the fact that I don't understand Upstart as well as I should. Sorry in advance for the long question. I'm trying to use Upstart to manage a Rails app's Unicorn master process. Here is my current /etc/init/app.conf: description "app" start on runlevel [2] stop on runlevel [016] console owner # expect daemon script APP_ROOT=/home/deploy/app PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production # >> /tmp/upstart.log 2>&1 end script # respawn That works just fine - the Unicorns start up great. What's not great is that the PID detected is not of the Unicorn master, it's of an sh process. That in and of itself isn't so bad, either - if I wasn't using the automagical Unicorn zero-downtime deployment strategy. Because shortly after I send -USR2 to my Unicorn master, a new master spawns up, and the old one dies...and so does the sh process. So Upstart thinks my job has died, and I can no longer restart it with restart or stop it with stop if I want. I've played around with the config file, trying to add -D to the Unicorn line (like this: $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D) to daemonize Unicorn, and I added the expect daemon line, but that didn't work either. I've tried expect fork as well. Various combinations of all of those things can cause start and stop to hang, and then Upstart gets really confused about the state of the job. Then I have to restart the machine to fix it. I think Upstart is having problems detecting when/if Unicorn is forking because I'm using rbenv + the ruby-local-exec shebang in my $APP_ROOT/bin/unicorn script. Here it is: #!/usr/bin/env ruby-local-exec # # This file was generated by Bundler. # # The application 'unicorn' is installed as part of a gem, and # this file is here to facilitate running it. # require 'pathname' ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile", Pathname.new(__FILE__).realpath) require 'rubygems' require 'bundler/setup' load Gem.bin_path('unicorn', 'unicorn') Additionally, the ruby-local-exec script looks like this: #!/usr/bin/env bash # # `ruby-local-exec` is a drop-in replacement for the standard Ruby # shebang line: # # #!/usr/bin/env ruby-local-exec # # Use it for scripts inside a project with an `.rbenv-version` # file. When you run the scripts, they'll use the project-specified # Ruby version, regardless of what directory they're run from. Useful # for e.g. running project tasks in cron scripts without needing to # `cd` into the project first. set -e export RBENV_DIR="${1%/*}" exec ruby "$@" So there's an exec in there that I'm worried about. It fires up a Ruby process, which fires up Unicorn, which may or may not daemonize itself, which all happens from an sh process in the first place...which makes me seriously doubt the ability of Upstart to track all of this nonsense. Is what I'm trying to do even possible? From what I understand, the expect stanza in Upstart can only be told (via daemon or fork) to expect a maximum of two forks.
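
    For what it's worth, here is a sketch of the direction that usually gets Upstart tracking the right process: skip bin/unicorn and its ruby-local-exec shebang entirely, exec the rbenv shim directly, daemonize with -D, and tell Upstart to expect a double fork. The paths are taken from the question above, but whether Unicorn's -D daemonization matches the exact two-fork pattern `expect daemon` assumes is an assumption, not a guarantee:

      description "app (sketch)"

      start on runlevel [2]
      stop on runlevel [016]

      env APP_ROOT=/home/deploy/app

      # "expect daemon" tells Upstart the traced process forks twice before
      # settling; Unicorn with -D is assumed (not verified) to do exactly that.
      expect daemon
      respawn

      script
        cd $APP_ROOT
        exec /home/deploy/.rbenv/shims/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D
      end script

    Even if this tracks the initial master correctly, it still won't survive the -USR2 re-exec, because the new master has a new PID that Upstart never learns about; most setups either give up on zero-downtime reloads under Upstart or manage the master via its pid file from outside Upstart.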

    Read the article

  • MS Security Essentials efficiency / usage, suspicious processes

    - by biggvsdiccvs
    I recently noticed that my (originally pretty fast) Windows 7 Pro laptop started getting slow and using a lot of CPU power for no apparent reason. A full scan by Microsoft Security Essentials revealed nothing. After some investigation, I found multiple instances of a strange process called urpev.exe and a couple of similar exe files sitting in subdirectories of Users//AppData/Roaming (this particular one was in a folder called Xyceowme). Description: "Mescrosift Visaal Studie 2010". Company name: "Mesrosift Corporatien". Is it a virus or something? :) Now, all of these exe files were scheduled to be started from the Task Scheduler by tasks with names like "Security Center Update - 1291373911" and similar. My user name was listed as the author of the tasks. I disabled the tasks, restarted the computer in safe mode and moved all of the exe files to quarantine for further investigation. All of this was done last night. I just scanned the files with Security Essentials again (not updated since yesterday) in the quarantine location, and this time it found PWS:Win32/Zbot.gen!plock in urpev.exe (but not in the other exe files, which are most likely viruses, too). Category: Password Stealer Description: This program is dangerous and captures user passwords. Another strange process is browser.exe (not chrome.exe) by Google Inc., described as Google Chrome. I uninstalled Chrome but it's still there. It runs out of Users\\AppData\LocalLow\UIVoice\ToolMedium\browser.exe, and if I move it in safe mode, it just reappears there, and multiple instances run. Needless to say, if I kill it, it runs again. I couldn't see anything in Task Scheduler, but found a couple of references to it in the Registry Editor: HKEY_CURRENT_USER/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/ HKEY_USERS/S-1-5-21-1685709306-872053864-2599010960-1002/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/ Maybe it's a legit process, but it seems kind of strange. For the time being, I suspended the process and killed all of the child processes when I booted up the laptop. I used Security Essentials to scan the system periodically, but obviously it's not effective against at least one virus. I had the "real-time protection" turned off. Would it help if it were turned on, and how much of a nuisance would it be? I wonder if there is a better alternative to Security Essentials. Over the years I've used multiple antivirus products at home and especially at work and was not very happy with any of them. Apparently, asking for software recommendations or comparisons is taboo here, but I will mention that I installed Malwarebytes and it was able to find and quarantine a bunch of suspicious files, at least some of which were truly infected, but when it scans the bogus security center update executables from "Mesrosift Corporatien", it finds nothing wrong. Also, any thoughts on the browser.exe mystery? Neither MS Security Essentials nor Malwarebytes found anything wrong with that file. However, after I ran a Malwarebytes scan, quarantined everything it found suspicious and rebooted the laptop, the process did not run.
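
    For enumerating what the dropper left behind, the built-in tools can dump every scheduled task and the usual autorun keys without installing anything. A sketch, to be run from an elevated command prompt (the task name in the delete example is taken from the post above; adjust to whatever the dump shows):

      :: Dump all scheduled tasks verbosely; look for "Security Center Update - *"
      :: entries and the executables listed under "Task To Run".
      schtasks /query /fo LIST /v > "%USERPROFILE%\Desktop\tasks.txt"

      :: List per-user and machine-wide autorun entries where droppers re-register.
      reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"
      reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Run"

      :: Remove a specific malicious task once identified.
      schtasks /delete /tn "Security Center Update - 1291373911" /f

    As for real-time protection: it exists precisely to catch droppers at write/execute time rather than at the next full scan, so leaving it off largely defeats the product; the overhead on a Windows 7-era laptop is usually modest.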

    Read the article

  • Can't sync filesystem without reboot

    - by Fabio
    I'm having an issue with a Linux server. Once a week the running mysql instance hangs and there is no way to fully stop it. If I kill it, it remains in zombie status and init does not reap its pid. The server is used for staging deployments and some internal tools, so it's not under heavy load. The only process constantly used is mysql, and for this reason I think it's the only process that suffers from this issue. I've searched the system logs for errors and the only thing I found is this error (repeated a couple of times) in the dmesg output: [706560.640085] INFO: task mysqld:31965 blocked for more than 120 seconds. [706560.640198] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [706560.640312] mysqld D ffff88032fd93f40 0 31965 1 0x00000000 [706560.640317] ffff880242a27d18 0000000000000086 ffff88031a50dd00 ffff880242a27fd8 [706560.640321] ffff880242a27fd8 ffff880242a27fd8 ffff88031e549740 ffff88031a50dd00 [706560.640325] ffff88031a50dd00 ffff88032fd947f8 0000000000000002 ffffffff8112f250 [706560.640328] Call Trace: [706560.640338] [<ffffffff8112f250>] ? __lock_page+0x70/0x70 [706560.640344] [<ffffffff816cb1b9>] schedule+0x29/0x70 [706560.640347] [<ffffffff816cb28f>] io_schedule+0x8f/0xd0 [706560.640350] [<ffffffff8112f25e>] sleep_on_page+0xe/0x20 [706560.640353] [<ffffffff816c9900>] __wait_on_bit+0x60/0x90 [706560.640356] [<ffffffff8112f390>] wait_on_page_bit+0x80/0x90 [706560.640360] [<ffffffff8107dce0>] ? autoremove_wake_function+0x40/0x40 [706560.640363] [<ffffffff8112f891>] filemap_fdatawait_range+0x101/0x190 [706560.640366] [<ffffffff81130975>] filemap_write_and_wait_range+0x65/0x70 [706560.640371] [<ffffffff8122e441>] ext4_sync_file+0x71/0x320 [706560.640376] [<ffffffff811c3e6d>] do_fsync+0x5d/0x90 [706560.640379] [<ffffffff811c40d0>] sys_fsync+0x10/0x20 [706560.640383] [<ffffffff816d495d>] system_call_fastpath+0x1a/0x1f When this happens, the only way to get everything working again is a full reboot, but in order to do that I'm forced to use this command after I've manually stopped all running processes echo b > /proc/sysrq-trigger otherwise the normal reboot process hangs forever. I've traced the reboot scripts and found out that the reboot process also hangs on a sync call, this one in /etc/init.d/sendsigs (I'm on Ubuntu) # Flush the kernel I/O buffer before we start to kill # processes, to make sure the IO of already stopped services to # not slow down the remaining processes to a point where they # are accidentily killed with SIGKILL because they did not # manage to shut down in time. sync I'm almost sure that the cause of this is a hardware issue (the RAID controller???), also because I have two other machines with the same hardware and software configuration and they don't suffer from this, but I can't find any hint in syslog or dmesg. I've also installed the smartmontools and mcelog packages but neither of them reported any issue. What can I do to track down the cause of this issue? Today it happened again; here is the status of the system after triggering a reboot init---console-kit-dae---64*[{console-kit-dae}] +-dbus-daemon +-mcelog +-mysqld---{mysqld} +-newrelic-daemon---newrelic-daemon---11*[{newrelic-daemon}] +-ntpd +-polkitd---{polkitd} +-python3 +-rpc.idmapd +-rpc.statd +-rpcbind +-sh---rc---S20sendsigs---sync +-smartd +-snmpd +-sshd---sshd---zsh---sudo---zsh---pstree +-sshd---sshd---zsh---sudo---zsh And here is the status of the sync process # ps aux | grep sync root 3637 0.1 0.0 4352 372 ? D 05:53 0:00 sync i.e. Uninterruptible sleep... 
    Hardware specs as reported by lshw. I think the RAID controller is a fake RAID; I usually don't deal with hardware (and for the record I don't have physical access to it). description: Computer product: X7DBP () vendor: Supermicro version: 0123456789 serial: 0123456789 width: 64 bits capabilities: smbios-2.4 dmi-2.4 vsyscall32 configuration: administrator_password=disabled boot=normal frontpanel_password=unknown keyboard_password=unknown power-on_password=disabled uuid=53D19F64-D663-A017-8922-0030487C1FEE *-core description: Motherboard product: X7DBP vendor: Supermicro physical id: 0 version: PCB Version serial: 0123456789 *-firmware description: BIOS vendor: Phoenix Technologies LTD physical id: 0 version: 6.00 date: 05/29/2007 size: 106KiB capacity: 960KiB capabilities: pci pnp upgrade shadowing escd cdboot bootselect edd int13floppy2880 acpi usb ls120boot zipboot biosbootspecification *-storage description: RAID bus controller product: 631xESB/632xESB SATA RAID Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 09 width: 32 bits clock: 66MHz capabilities: storage pm bus_master cap_list configuration: driver=ahci latency=0 resources: irq:19 ioport:18a0(size=8) ioport:1874(size=4) ioport:1878(size=8) ioport:1870(size=4) ioport:1880(size=32) memory:d8500400-d85007ff
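
    Since the processes are stuck in uninterruptible sleep inside ext4 writeback, it is worth capturing which kernel path they block in the next time it happens, before rebooting. A sketch using standard interfaces (run as root; magic SysRq must be enabled):

      # List D-state (uninterruptible) processes and the kernel symbol they wait in.
      ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'

      # Dump stack traces of all blocked tasks into the kernel log, then read them.
      echo 1 > /proc/sys/kernel/sysrq
      echo w > /proc/sysrq-trigger
      dmesg | tail -n 120

      # Optionally make the kernel report stalls sooner than the default 120s.
      echo 60 > /proc/sys/kernel/hung_task_timeout_secs

    If every dump ends in the same block-device path, that strengthens the hardware suspicion; a quiet smartd does not rule out controller, backplane, or cabling faults.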

    Read the article

  • Set up Linux box for hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP: To upgrade PHP to the latest version, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! cd /tmp #wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm #rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm #wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm #rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. [will list all packages available in the IUS repo] rpm -qa | grep php [will list installed packages needed to be removed. the installed packages need to be removed before you can install the IUS packages otherwise there will be conflicts] #yum shell >remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap Setting up Remove Process >install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel >transaction solve >transaction run Leaving Shell #php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) This process removes the old version of PHP and installs the latest. To upgrade mysql: Pretty much the same process as above with PHP #/etc/init.d/mysqld stop [OK] rpm -qa | grep mysql [installed mysql packages] #yum shell >remove mysql mysql-server Setting up Remove Process >install mysql51 mysql51-server mysql51-devel >transaction solve >transaction run Leaving Shell #service mysqld start [OK] #mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project The above upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Create a chroot jail to hold sftp user via rssh. This will force SCP/SFTP and will circumvent traditional FTP server setup. 
    #cd /tmp #wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm #rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm #useradd -m -d /home/dev -s /usr/bin/rssh dev #passwd dev Edit /etc/rssh.conf to grant SFTP access to rssh users. #vi /etc/rssh.conf Uncomment the line allowscp This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). Above instructions for SFTP appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtual interfaces/IP-based virtual hosts for SSL, setting up a CA, or anything else would be appreciated.
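
    For the SSL/CA part, since clients come in over the VPN only, a small private CA is usually enough and no public certificate is needed. A rough sketch with openssl; all paths, hostnames, and the IP below are placeholders, and note that on Apache 2.2 without SNI each SSL vhost needs its own IP:port rather than a name-based match:

      # Create a private CA, then issue a server certificate signed by it.
      openssl genrsa -out ca.key 2048
      openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
      openssl genrsa -out server.key 2048
      openssl req -new -key server.key -out server.csr
      openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key \
          -set_serial 01 -out server.crt

    And a corresponding mod_ssl vhost, e.g. in /etc/httpd/conf.d/app.conf:

      <VirtualHost 192.168.1.10:443>
          ServerName app.internal.example
          DocumentRoot /var/www/app
          SSLEngine on
          SSLCertificateFile /etc/pki/tls/certs/server.crt
          SSLCertificateKeyFile /etc/pki/tls/private/server.key
          # Trust the private CA; optionally require client certs it has signed.
          SSLCACertificateFile /etc/pki/tls/certs/ca.crt
          SSLVerifyClient require
          SSLVerifyDepth 1
      </VirtualHost>

    Drop SSLVerifyClient/SSLVerifyDepth if you only want server-side TLS rather than mutual authentication of clients.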

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 Instance - m1.medium Ubuntu 12.04 Apache 2.2.22 - Running a Drupal Site Using MySQL DB Server RAM info: ~$ free -gt total used free shared buffers cached Mem: 3 1 2 0 0 0 -/+ buffers/cache: 0 2 Swap: 0 0 0 Total: 3 1 2 Hard drive info: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 4.7G 2.9G 62% / udev 1.9G 8.0K 1.9G 1% /dev tmpfs 751M 180K 750M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/xvdb 394G 199M 374G 1% /mnt The problem: About two days ago the site started failing because the MySQL server was shut down by Apache with the following message: kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal kernel: [2963686.169294] init: mysql main process ended, respawning That states that the VM was occupying 0.9GB, but my RAM has 2GB free, so 1GB was still left free. I understand that on Linux, applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time it has started to happen. Obviously, the MySQL server tries to restart, but apparently there's no memory for it and it won't restart. Here is its error log: Plugin 'FEDERATED' is disabled. The InnoDB memory heap is disabled Mutexes and rw_locks use GCC atomic builtins Compressed tables use zlib 1.2.3.4 Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 Completed initialization of buffer pool Fatal error: cannot allocate memory for the buffer pool Plugin 'InnoDB' init function returned error. Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Unknown/unsupported storage engine: InnoDB [ERROR] Aborting [Note] /usr/sbin/mysqld: Shutdown complete I simply restarted the MySQL service. About two hours later it happened again. I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf, so I went to check it out. It was set at 150. I decided to drop it down to 60. As so: <IfModule mpm_prefork_module> ... MaxClients 60 </IfModule> <IfModule mpm_worker_module> ... MaxClients 60 </IfModule> <IfModule mpm_event_module> ... MaxClients 60 </IfModule> Once I did that, I had the apache2 service restart and it all went smoothly for 3/4 of a day. But at night the MySQL service shut down once again; this time it wasn't killed by the Apache2 service. Instead it invoked the OOM-Killer with the following message: kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0 kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal Now I'm out of ideas. Some articles state that the ideal thing to do is to change the kernel behaviour with the following (add it to the file /etc/sysctl.conf): vm.overcommit_memory = 2 vm.overcommit_ratio = 80 So no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I only have basic knowledge. Thanks a bunch in advance.
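
    One concrete thing stands out in the free output: the instance has no swap at all, so any transient spike goes straight to the OOM killer instead of paging. Before (or instead of) flipping overcommit off globally, adding a modest swap file, plus keeping MaxClients x per-child Apache RSS + mysqld's footprint under the available RAM, usually stops the kills. A sketch (the 1 GB size and /swapfile path are arbitrary choices, not recommendations):

      # Create and enable a 1 GB swap file, persistent across reboots.
      dd if=/dev/zero of=/swapfile bs=1M count=1024
      chmod 600 /swapfile
      mkswap /swapfile
      swapon /swapfile
      echo '/swapfile none swap sw 0 0' >> /etc/fstab

    Note that vm.overcommit_memory = 2 on a swapless box makes allocations fail earlier, not later; mysqld's mmap of the 128 MB buffer pool is exactly the kind of call that will start returning errno 12 under it, so test that combination carefully.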

    Read the article

  • A class meant for an Alfresco behavior and its bean, how do they work and how are they deployed through Eclipse

    - by MrHappy
    (This is a partial repost of a question asked 10 days ago because only 1 part was answered (not included); I've rewritten it into a much better question and added 3 more tags.) Where do I put the DeleteAsset.class, or why isn't it being found? I've put the compiled class from the bin of the Eclipse workspace into alfresco-4.2.c/tomcat/webapps/alfresco/WEB-INF/classes/com/openerp/behavior/ and right now it's giving me Error loading class [com.openerp.behavior.DeleteAsset] for bean with name 'deletionBehavior' defined in URL [file:/home/openerp/alfresco-4.2.c/tomcat/shared/classes/alfresco/extension/custom-web-context.xml]: problem with class file or dependent class; nested exception is java.lang.NoClassDefFoundError: com/openerp/behavior/DeleteAsset (wrong name: DeleteAsset) when I put it in there. (See bean below!) The code (I'm trying to work without the model class; I don't know if I made any silly mistakes on that): package com.openerp.behavior; import java.util.List; import java.net.*; import java.io.*; import org.alfresco.repo.node.NodeServicePolicies; import org.alfresco.repo.policy.Behaviour; import org.alfresco.repo.policy.JavaBehaviour; import org.alfresco.repo.policy.PolicyComponent; import org.alfresco.repo.policy.Behaviour.NotificationFrequency; import org.alfresco.repo.security.authentication.AuthenticationUtil; import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork; import org.alfresco.service.cmr.repository.ChildAssociationRef; import org.alfresco.service.cmr.repository.NodeRef; import org.alfresco.service.cmr.repository.NodeService; import org.alfresco.service.namespace.NamespaceService; import org.alfresco.service.namespace.QName; import org.alfresco.service.transaction.TransactionService; import org.apache.log4j.Logger; //this is the newer version //import com.openerp.model.openerpJavaModel; public class DeleteAsset implements NodeServicePolicies.BeforeDeleteNodePolicy { private PolicyComponent policyComponent; private Behaviour beforeDeleteNode; private NodeService nodeService; public void init() { this.beforeDeleteNode = new JavaBehaviour(this,"beforeDeleteNode",NotificationFrequency.EVERY_EVENT); this.policyComponent.bindClassBehaviour(QName.createQName("http://www.someco.com/model/content/1.0","beforeDeleteNode"), QName.createQName("http://www.someco.com/model/content/1.0","sc:doc"), this.beforeDeleteNode); } public setNodeService(NodeService nodeService){ this.nodeService = nodeService; } @Override public void beforeDeleteNode(NodeRef node) { System.out.println("beforeDeleteNode!"); try { QName attachmentID1= QName.createQName("http://www.someco.com/model/content/1.0", "OpenERPattachmentID1"); // this could/should be defined in your OpenERPModel-class int attachmentid = (Integer)nodeService.getProperty(node, attachmentID1); //int attachmentid = 123; URL oracle = new URL("http://0.0.0.0:1885/delete/%20?attachmentid=" + attachmentid); URLConnection yc = oracle.openConnection(); BufferedReader in = new BufferedReader(new InputStreamReader( yc.getInputStream())); String inputLine; while ((inputLine = in.readLine()) != null) //System.out.println(inputLine); in.close(); } catch(Exception e) { e.printStackTrace(); } } } This is my full custom-web-context file: <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN' 'http://www.springframework.org/dtd/spring-beans.dtd'> <beans> <!-- Registration of new models --> <bean id="smartsolution.dictionaryBootstrap" parent="dictionaryModelBootstrap" depends-on="dictionaryBootstrap"> 
<property name="models"> <list> <value>alfresco/extension/scOpenERPModel.xml</value> </list> </property> </bean> <!-- deletion of attachments within openERP when delete is initiated in Alfresco--> <bean id="DeleteAsset" class="com.openerp.behavior.DeleteAsset" init-method="init"> <property name="NodeService"> <ref bean="NodeService" /> </property> <property name="PolicyComponent"> <ref bean="PolicyComponent" /> </property> </bean> and content type: <type name="sc:doc"> <title>OpenERP Document</title> <parent>cm:content</parent> There's also this when I open share An error has occured in the Share component: /share/service/components/dashlets/my-sites. It responded with a status of 500 - Internal Error. Error Code Information: 500 - An error inside the HTTP server which prevented it from fulfilling the request. Error Message: 09230001 Failed to execute script 'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js': 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 Server: Alfresco Spring WebScripts - v1.2.0 (Release 1207) schema 1,000 Time: Oct 23, 2013 11:40:06 AM Click here to view full technical information on the error. Exception: org.alfresco.error.AlfrescoRuntimeException - 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:108) org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:59) org.alfresco.web.scripts.ImapServerStatus.getEnabled(ImapServerStatus.java:49) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:606) org.mozilla.javascript.MemberBox.invoke(MemberBox.java:155) org.mozilla.javascript.JavaMembers.get(JavaMembers.java:117) org.mozilla.javascript.NativeJavaObject.get(NativeJavaObject.java:113) org.mozilla.javascript.ScriptableObject.getProperty(ScriptableObject.java:1544) org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1375) org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1364) org.mozilla.javascript.gen.c6._c1(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:4) org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.mozilla.javascript.optimizer.OptRuntime.callName0(OptRuntime.java:108) org.mozilla.javascript.gen.c6._c0(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:51) org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:393) org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:2834) 
org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.mozilla.javascript.gen.c6.exec(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js) org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:318) org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:192) org.springframework.extensions.webscripts.AbstractWebScript.executeScript(AbstractWebScript.java:1305) org.springframework.extensions.webscripts.DeclarativeWebScript.execute(DeclarativeWebScript.java:86) org.springframework.extensions.webscripts.PresentationContainer.executeScript(PresentationContainer.java:70) org.springframework.extensions.webscripts.LocalWebScriptRuntimeContainer.executeScript(LocalWebScriptRuntimeContainer.java:240) org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:377) org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:209) org.springframework.extensions.webscripts.WebScriptProcessor.executeBody(WebScriptProcessor.java:310) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.process(RenderService.java:599) org.springframework.extensions.surf.render.RenderService.renderSubComponent(RenderService.java:505) org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1284) org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IfBlock.accept(IfBlock.java:82) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.process(Environment.java:199) org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171) org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204) org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86) org.springframework.extensions.surf.render.RenderService.processComponent(RenderService.java:432) org.springframework.extensions.surf.render.bean.ComponentRenderer.body(ComponentRenderer.java:94) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.RenderService.renderComponent(RenderService.java:961) 
org.springframework.extensions.surf.render.RenderService.renderRegionComponents(RenderService.java:900) org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1263) org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.process(Environment.java:199) org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171) org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204) org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86) org.springframework.extensions.surf.render.bean.RegionRenderer.body(RegionRenderer.java:99) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.RenderService.renderRegion(RenderService.java:851) org.springframework.extensions.directives.RegionDirectiveData.render(RegionDirectiveData.java:91) org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179) freemarker.core.Environment.visit(Environment.java:428) freemarker.core.IteratorBlock.accept(IteratorBlock.java:102) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179) freemarker.core.Environment.visit(Environment.java:428) freemarker.core.IteratorBlock.accept(IteratorBlock.java:102) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Macro$Context.runMacro(Macro.java:172) freemarker.core.Environment.visit(Environment.java:614) freemarker.core.UnifiedCall.accept(UnifiedCall.java:106) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.IfBlock.accept(IfBlock.java:82) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Macro$Context.runMacro(Macro.java:172) freemarker.core.Environment.visit(Environment.java:614) freemarker.core.UnifiedCall.accept(UnifiedCall.java:106) 
freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment$3.render(Environment.java:246) org.springframework.extensions.surf.extensibility.impl.DefaultExtensibilityDirectiveData.render(DefaultExtensibilityDirectiveData.java:119) org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169) org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137) freemarker.core.Environment.visit(Environment.java:274) freemarker.core.UnifiedCall.accept(UnifiedCall.java:126) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.visit(Environment.java:406) freemarker.core.BodyInstruction.accept(BodyInstruction.java:93) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Macro$Context.runMacro(Macro.java:172) freemarker.core.Environment.visit(Environment.java:614) freemarker.core.UnifiedCall.accept(UnifiedCall.java:106) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.MixedContent.accept(MixedContent.java:92) freemarker.core.Environment.visit(Environment.java:221) freemarker.core.Environment.process(Environment.java:199) org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171) org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438) org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57) org.springframework.extensions.surf.render.RenderService.processTemplate(RenderService.java:721) org.springframework.extensions.surf.render.bean.TemplateInstanceRenderer.body(TemplateInstanceRenderer.java:140) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.bean.PageRenderer.body(PageRenderer.java:85) org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77) org.springframework.extensions.surf.render.RenderService.renderPage(RenderService.java:762) org.springframework.extensions.surf.mvc.PageView.dispatchPage(PageView.java:411) org.springframework.extensions.surf.mvc.PageView.renderView(PageView.java:306) org.springframework.extensions.surf.mvc.AbstractWebFrameworkView.renderMergedOutputModel(AbstractWebFrameworkView.java:316) org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250) org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1047) org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:817) org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719) org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644) org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549) javax.servlet.http.HttpServlet.service(HttpServlet.java:621) javax.servlet.http.HttpServlet.service(HttpServlet.java:722) 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.alfresco.web.site.servlet.MTAuthenticationFilter.doFilter(MTAuthenticationFilter.java:74) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.alfresco.web.site.servlet.SSOAuthenticationFilter.doFilter(SSOAuthenticationFilter.java:374) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929) org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002) org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585) org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:1771) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) java.lang.Thread.run(Thread.java:724) Exception: org.springframework.extensions.webscripts.WebScriptException - 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:324) Exception: org.springframework.extensions.webscripts.WebScriptException - 09230001 Failed to execute script 'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js': 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404 org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:200) UPDATE: I think I've found the problem. Being a newbie to Eclipse, I think I haven't managed the dependencies well. Could anyone link me to a tutorial describing how to get org.alfresco.repo.node.NodeServicePolicies (as seen in import org.alfresco.repo.node.NodeServicePolicies;) and other such imports into Eclipse? I've got the Alfresco source from SVN, but the tutorial I've found seems to fail me. 
java/lang/Error\00\F1Unresolved compilation problems: The declared package "com.openerp.behavior" does not match the expected package "java.com.openerp.behavior" The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.alfresco cannot be resolved The import org.apache cannot be resolved The import com.openerp cannot be resolved NodeServicePolicies cannot be resolved to a type PolicyComponent cannot be resolved to a type Behaviour cannot be resolved to a type NodeService cannot be resolved to a type Behaviour cannot be resolved to a type JavaBehaviour cannot be resolved to a type NotificationFrequency cannot be resolved to a variable PolicyComponent cannot be resolved to a type QName cannot be resolved QName cannot be resolved Behaviour cannot be resolved to a type Return type for the method is missing NodeService cannot be resolved to a type NodeService cannot be resolved to a type NodeRef cannot be resolved to a type QName cannot be resolved to a type QName cannot be resolved NodeService cannot be resolved to a type \00\00\00\00\00(Ljava/lang/String;)V\00LineNumberTable\00LocalVariableTable\00this\00'Ljava/com/openerp/behavior/DeleteAsset;\00init\008Unresolved compilation problems: Behaviour cannot be resolved to a type JavaBehaviour cannot be resolved to a type NotificationFrequency cannot be resolved to a variable PolicyComponent cannot be resolved to a type QName cannot be resolved QName cannot be resolved Behaviour cannot be resolved to a type \00(LNodeRef;)V\00\00\B0Unresolved compilation problems: NodeRef cannot be resolved to a type QName cannot be resolved to a type QName cannot be resolved NodeService cannot be resolved to a type
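
    Two things in the posted configuration are worth lining up regardless of the classpath question, and the compiler dump above ("The declared package "com.openerp.behavior" does not match the expected package "java.com.openerp.behavior"") suggests the Eclipse source folder root is set one level too high, so the class was compiled broken before it ever reached Tomcat. For reference, a sketch of how the setters and the bean wiring normally have to match for Spring: property names are derived from the setter names (lowercase first letter), every setter needs an explicit return type, and the referenced bean ids below are the ones a stock Alfresco context defines (they may differ in yours):

      // Sketch: Spring resolves <property name="nodeService"> against this
      // setter, which must declare a void return type to compile at all.
      public void setNodeService(NodeService nodeService) {
          this.nodeService = nodeService;
      }

      // The posted class has a policyComponent field but no setter for it,
      // so the <property name="PolicyComponent"> injection cannot work.
      public void setPolicyComponent(PolicyComponent policyComponent) {
          this.policyComponent = policyComponent;
      }

    and the matching bean definition (the id is chosen to match the 'deletionBehavior' name in the error message, which differs from the id="DeleteAsset" in the posted XML):

      <bean id="deletionBehavior" class="com.openerp.behavior.DeleteAsset" init-method="init">
          <property name="nodeService">
              <ref bean="NodeService" />
          </property>
          <property name="policyComponent">
              <ref bean="policyComponent" />
          </property>
      </bean>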

    Read the article

  • Unable to call webservice from another webservice : NETBEANS

    - by PhoeniX
    ############## WEBSERVICE 1 (banking.java) package bank; import client.TestserviceService; import javax.jws.WebMethod; import javax.jws.WebService; import javax.xml.ws.WebServiceRef; @WebService() public class banking { @WebServiceRef(wsdlLocation = "WEB-INF/wsdl/localhost_23164/testwebservice/testserviceService.wsdl") private TestserviceService service; /** * Web service operation */ @WebMethod(operationName = "getBalance") public int getBalance() { //TODO write your implementation code here: int a=-1; try { // Call Web Service Operation client.Testservice port = service.getTestservicePort(); // TODO process result here java.lang.String result = port.getData(); a= Integer.parseInt(result); System.out.println("Result = "+result); } catch (Exception ex) { // TODO handle custom exceptions here } return a; } } ##################### WEB SERVICE 2 package test; import javax.jws.WebService; @WebService() public class testservice { public String getData() { return "3"; } } I am trying to call the web service reference of webservice 2 from webservice 1, as can be seen in the code. I am using NetBeans. ****************************************** THE ERROR I AM GETTING IS INFO: parsing WSDL... INFO: [ERROR] Premature end of file. INFO: line 1 of http://localhost:23164/learnwebservice/bankingService?WSDL WARNING: StandardWrapperValve[banking]: PWC1406: Servlet.service() for servlet banking threw exception javax.servlet.ServletException at org.glassfish.webservices.JAXWSServlet.doPost(JAXWSServlet.java:152) at javax.servlet.http.HttpServlet.service(HttpServlet.java:754) at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Caused by: 
javax.servlet.ServletException: Service not found
	at org.glassfish.webservices.JAXWSServlet.doPost(JAXWSServlet.java:149)
	... 26 more
WARNING: StandardWrapperValve[banking]: PWC1406: Servlet.service() for servlet banking threw exception
javax.servlet.ServletException
	at org.glassfish.webservices.JAXWSServlet.doPost(JAXWSServlet.java:152)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:754)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
	at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188)
	at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641)
	at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97)
	at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185)
	at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233)
	at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165)
	at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
	at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
	at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
	at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
	at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
	at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
	at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
	at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
	at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
	at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
	at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
	at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
	at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
	at java.lang.Thread.run(Thread.java:619)
Caused by: javax.servlet.ServletException: Service not found
	at org.glassfish.webservices.JAXWSServlet.doPost(JAXWSServlet.java:149)
	... 26 more
(This WARNING and its full stack trace repeat verbatim for every request.) Each failed request is accompanied by the following wsimport output:
INFO: Invoking wsimport with http//localhost:23164/learnwebservice/bankingService?WSDL
INFO: parsing WSDL...
INFO: [ERROR] Premature end of file.
INFO: line 1 of http//localhost:23164/learnwebservice/bankingService?WSDL
INFO: [ERROR] Failed to read the WSDL document: http//localhost:23164/learnwebservice/bankingService?WSDL, because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <wsdl:definitions>.
INFO: [ERROR] failed.noservice=Could not find wsdl:service in the provided WSDL(s): At least one WSDL with at least one service definition needs to be provided.
INFO: Failed to parse the WSDL.
SEVERE: wsimport failed
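One detail worth checking before anything else: every WSDL URL in the log reads "http//localhost:23164/..." without the colon after the scheme. If that is not merely a paste artifact, the malformed URL alone would explain why wsimport cannot read the document. Below is a minimal JAX-WS client sketch for verifying that the WSDL resolves, assuming a correctly formed URL; the target namespace and service name are placeholders that must be taken from the actual WSDL:

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class WsdlCheck {
    public static void main(String[] args) throws Exception {
        // Note the ":" after "http" -- the URLs in the log above lack it
        URL wsdlUrl = new URL("http://localhost:23164/learnwebservice/bankingService?WSDL");

        // Hypothetical QName: replace with the targetNamespace and
        // wsdl:service name declared in your WSDL document
        QName serviceName = new QName("http://banking.example.org/", "BankingService");

        // Throws WebServiceException if the WSDL cannot be fetched or parsed
        Service service = Service.create(wsdlUrl, serviceName);
        System.out.println("WSDL resolved, service: " + service.getServiceName());
    }
}

If this check fails with the same "Premature end of file" message, the server is returning an empty or truncated response for the WSDL request, and the deployment itself, rather than the client, needs attention.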

    Read the article

  • Oracle Customer Reference Forum – Apex IT – Oracle Sales Cloud

    - by Richard Lefebvre
Apex IT, an Oracle Platinum Partner, won Nucleus Research's ROI Award with a 724% return. Learn how you can improve your ROI with Oracle Sales and Marketing Cloud. We are pleased to invite you to a discussion with Apex IT on industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and the benefits achieved since going live. Apex IT works with clients large and small, assisting them at all stages in the process: organizing ideas and developing strategies, selecting the most appropriate package, implementing it for best results, and keeping systems optimized with long-term support. Please plan to register at least three hours before the event takes place in order to participate and receive the dial-in information in time. Speakers: Bryan Hinz, Vice President of Business Development, Apex IT (Speaker); Chris Haven, Senior Director Product Management, Oracle (Moderator). Organization Profile: Since 1997, Apex IT has helped public sector, corporate and higher education clients use technology to streamline their processes and increase productivity and profitability. Based on products and best practices from Oracle, our experts provide a full range of enterprise solutions including CX/CRM and related applications that support marketing, sales, and service; HR and HR Helpdesk; and Business Intelligence. Our project approach is results-driven and our attitude is people-focused. Industry: Professional Services. Products/Services: Oracle Sales Cloud. Organization Website: http://apexit.com/ Event Description: In this informal reference call, you will have the opportunity to hear Apex IT discuss industry trends, why sales automation is important, the decision-making process for choosing Oracle Sales Cloud, and the benefits achieved since going live. The call will open with a brief overview, followed by discussion and an open question-and-answer session. Please allow one hour for the call. Why Oracle: Apex IT needed a mobile-enabled sales force automation tool that could promote account collaboration and integrate with Microsoft Outlook. Oracle Sales Cloud met these needs and Apex IT's requirements for improved collaborative selling, improved quality of customer engagement and information, improved business development, and improved pipeline management. After you register, your information will be forwarded through an approval process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details, subject to availability. Please be advised that Apex IT will review the registrant list and may decline registrations at its discretion. Note: To access more information on the corporate site you will need an Oracle.com account. If you do not already have an account, getting one is easy and free: click the link and you will be prompted to create an account. After you have created your account, you will be automatically returned to the full description of this event. Register Now!

    Read the article

  • What is the Oracle Utilities Application Framework?

    - by Anthony Shorten
The Oracle Utilities Application Framework is a reusable, scalable and flexible Java-based framework which allows other products to be built, configured and implemented in a standard way. Note: Even though the Framework is built in Java, it can be integrated with COBOL-based extensions for backward compatibility. When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated out to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e. a framework), allowing all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code). The technical components contained in the Oracle Utilities Application Framework can be summarized as follows: Metadata - The Oracle Utilities Application Framework is responsible for defining and using the metadata that defines the runtime behavior of the product. All metadata definition and management is contained within the Oracle Utilities Application Framework. UI Management - The Oracle Utilities Application Framework is responsible for defining and rendering the pages and for ensuring the pages are in the appropriate format for the locale. Integration - The Oracle Utilities Application Framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details. Tools - The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used across all products. Technology - The Oracle Utilities Application Framework is responsible for all technology standards compliance, platform support and integration. There are a number of products from the Tax and Utilities Global Business Unit as well as from the Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first; the product itself is then installed onto the framework to complete the installation process. There are a number of key benefits that the Oracle Utilities Application Framework provides to these products: Common facilities - The Oracle Utilities Application Framework provides a standard set of technical facilities, which means that products can concentrate on the unique aspects of their markets rather than making technical decisions. Common methods of configuration - The Oracle Utilities Application Framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products. Multi-lingual and multi-platform - The Oracle Utilities Application Framework allows the products to be offered in more markets and across multiple platforms for maximized flexibility. Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products. Quicker adoption of new technologies - As new technologies and standards are identified as being important for the product line, they can be integrated centrally, benefiting multiple products.
Cross-product reuse - As enhancements to the Oracle Utilities Application Framework are identified by a particular product, all products can potentially benefit from the enhancement. Note: Use of the Oracle Utilities Application Framework does not preclude the introduction of product-specific technologies or facilities to satisfy market needs. The framework minimizes the need for this and assists in the quick integration of a new product-specific piece of technology (if necessary). The Framework is not available as a product in itself and is bundled with Tax and Utilities Global Business Unit products. At the present time the following products are built on the Framework: Oracle Utilities Customer Care And Billing (V2 and above), Oracle Enterprise Taxation Management (V2 and above), Oracle Utilities Business Intelligence (V2 and above), and Oracle Utilities Mobile Workforce Management (V2 and above).

    Read the article

  • SSAS processing error: Client unable to establish connection; 08001; Encryption not supported on the client.; 08001

    - by Kevin Shyr
After getting the cube to deploy and process successfully on Friday, I was baffled on Monday that the newly added dimension caused the cube processing to break. I followed my first instinct, discarded all my changes, and reverted back to Friday's version, with no luck. The error message (attached below) did not help, as I was looking for some kind of SQL service error. After examining the Windows server log and the SQL Server log, I just couldn't see anything wrong. After swearing for some time, and with the help of going off and working on something else for a while, I came back to the solution and looked at the data source. Even though I know I never changed the provider (the default setup gave me SQL Native Client), I decided to change it and give OLE DB a try. This simple change allowed my cube to process successfully again. While I don't understand why the same settings that worked last week don't work this week, I don't have all the information to say with certainty that nothing has changed in the environment (firewall changes, server updates, etc.).
SSAS processing command:
<Batch>
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
      <Object>
        <DatabaseID>DWH Sales Facts</DatabaseID>
        <CubeID>DWH Sales Facts</CubeID>
      </Object>
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>
Processing Dimension 'Date' completed.
Errors and Warnings from Response:
OLE DB error: OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'DWH Sales Facts', Name of 'DWH Sales Facts'.
Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Currency', Name of 'Currency' was being processed.
Errors in the OLAP storage engine: An error occurred while the 'Currency Dim ID' attribute of the 'Currency' dimension from the 'DWH Sales Facts' database was being processed.
Internal error: The operation terminated unsuccessfully.
Server: The operation has been cancelled.
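For reference, the fix described above amounts to swapping the provider in the cube's data source connection string. This is a hedged sketch, not taken from the original post; the server and catalog names are placeholders:

Before (SQL Server Native Client):
Provider=SQLNCLI10.1;Data Source=MyDwhServer;Initial Catalog=DWH_Sales;Integrated Security=SSPI;

After (OLE DB):
Provider=SQLOLEDB.1;Data Source=MyDwhServer;Initial Catalog=DWH_Sales;Integrated Security=SSPI;

Consistent with the "Encryption not supported on the client" message, one plausible explanation is that the two providers negotiate connection encryption differently, so a server-side change (for example a new Force Encryption setting or certificate) could break one provider but not the other.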

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the "Last Good Build" and then aggregates all changesets up until the label of the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well and good, but there is a dimension missing here when it comes to releases. Often you run several release builds before you actually deploy the result of a build to a test or production system. When you do this, wouldn't it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which is basically a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip files, SQL scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as the start time, who triggered the build, etc. It also contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets. Here is a sample screenshot of how this looks for a sample build/application. The web site is available both to us and to the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in that version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedChangesets. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to it. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process. Now you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity. Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course, your activity must be placed somewhere after the variable has been populated.
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable. First you add an InArgument on your activity so that you can pass in the variable that we defined:

[RequiredArgument]
public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

Then you can traverse all the changesets in the list and, for each changeset, use the WorkItems property to get the work items that were associated with that changeset:

foreach (Changeset ch in associatedChangesets)
{
    // Record the changeset itself
    theChangesets.Add(
        new AssociatedChangeset(ch.ChangesetId,
                                ch.ArtifactUri,
                                ch.Committer,
                                ch.Comment,
                                ch.ChangesetId));

    // Record every work item linked to this changeset
    foreach (var wi in ch.WorkItems)
    {
        theWorkItems.Add(
            new AssociatedWorkItem(wi["System.AssignedTo"].ToString(),
                                   wi.Id,
                                   wi["System.State"].ToString(),
                                   wi.Title,
                                   wi.Type.Name,
                                   wi.Id,
                                   wi.Uri));
    }
}

NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information, which is eventually pushed to the release repository.

    Read the article

  • Clean Code: A Handbook of Agile Software Craftsmanship – book review

    - by DigiMortal
Writing code that is easy to read and test is not easy to achieve. Unfortunately, there are still way too many programming students who write awful spaghetti code after graduating. But there is one really good book that helps you raise your code to a new level: your code will also become a communication tool for you and your fellow programmers. "Clean Code: A Handbook of Agile Software Craftsmanship" by Robert C. Martin is an excellent book that helps you start writing easily readable code. Of course, you are the one who has to learn and practice, but with this book you have a very good guide that keeps you going in the right direction. You can start writing better code while you read this book, and you can do it right in your current projects: you don't have to create a new guestbook or some other simple application to start practicing. Take the project you are working on and start making it better! My special thanks to Robert C. Martin. I want to say my special thanks to Robert C. Martin for this book. There are many books that teach you different things, and usually there is a marked learning curve to climb before you start getting results. There are many books that show you the direction to go and then leave you alone to figure out how to achieve all the stuff you just read about. Clean Code gives you a lot more: the mental tools to use so you can make your way to clean code, sure that you will soon get there. I read as many books as I have time for. Clean Code is a top-level book for developers who have to write working code. Before anything else, take Clean Code and read it. You will never regret your decision. I promise. Fragment of editorial review: "Even bad code can function. But if code isn't clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. But it doesn't have to be that way. What kind of work will you be doing? You'll be reading code—lots of code. And you will be challenged to think about what's right about that code, and what's wrong with it. More importantly, you will be challenged to reassess your professional values and your commitment to your craft. Readers will come away from this book understanding How to tell the difference between good and bad code How to write good code and how to transform bad code into good code How to create good names, good functions, good objects, and good classes How to format code for maximum readability How to implement complete error handling without obscuring code logic How to unit test and practice test-driven development This book is a must for any developer, software engineer, project manager, team lead, or systems analyst with an interest in producing better code." Table of contents: Clean code; Meaningful names; Functions; Comments; Formatting; Objects and data structures; Error handling; Boundaries; Unit tests; Classes; Systems; Emergence; Concurrency; Successive refinement; JUnit internals; Refactoring SerialDate; Smells and heuristics; Appendix A: Concurrency II; Appendix B: org.jfree.date.SerialDate; Appendix C: Cross references of heuristics; Epilogue; Index.

    Read the article

  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
Many companies or organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level. The data quality level is determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend the same amount of effort as you did on the previous cleansing, you can expect the same level of data quality as you had after the previous cleansing. And then the data quality deteriorates over time again, and the cleansing process starts over and over again. The idea of Data Quality Services (DQS) is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, etc. You don't throw away this knowledge; you store it and use it to find and correct the same issues automatically during your next cleansing process. The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application, together with the defined policies, people, and master data management processes, allows you to maintain this level permanently. OK, now you know what DQS and MDS are about, and you can imagine the importance of maintaining data quality. Here are some resources that help you prepare and execute data quality (DQ) and master data management (MDM) activities. Books: Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 - a general introduction to MDM, MDS, and data profiling, with matching explained in depth. Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 - I wrote quite a few chapters about DQ and MDM, and also introduced SQL Server 2012 DQS. Thomas Redman: Data Quality: The Field Guide - you should start with this book; Thomas Redman is the father of DQ and MDM. Tyler Graham: Microsoft SQL Server 2012 Master Data Services - MDS in depth from a product teammate. Arkady Maydanchik: Data Quality Assessment - data profiling in depth. Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning - advanced data profiling with data mining. Forthcoming presentations: I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013 on Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 - a good kick start to your first DQ and/or MDM project. Courses: Data Quality and Master Data Management with SQL Server 2012 - I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well. Start improving the quality of your data now!

    Read the article

  • Oracle WebCenter Portlet Debugging

    - by Alexander Rudat
Introduction
This article describes how to debug portlets that are already deployed to a WebLogic server using Oracle JDeveloper 11g.
Overview
These are the basic steps involved in remote debugging WebCenter portlets deployed in WebLogic:
Configuration of WebLogic to support remote debugging
Configuration of the portlet project in JDeveloper
Actual debugging of the portlet
Configuration of WebLogic
To start the WebLogic server in debugging mode, a couple of configuration changes need to be made to the WebLogic domain where the portlet is deployed. First we need to edit the JVM options of the WebLogic server startup script. Normally startManagedWebLogic.cmd is used to start this managed server. This startup script is located in the %MIDDLEWARE_HOME%\user_projects\domains\<domain_name>\bin directory, where %MIDDLEWARE_HOME% is the installation directory of WebLogic.
Add the following line before the set JAVA_OPTIONS= line:
set REMOTE_DEBUG_JAVA_OPTIONS=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n
Change the set JAVA_OPTIONS= line to read like the one below:
set JAVA_OPTIONS=%SAVE_JAVA_OPTIONS% %ADF_JAVA_OPTIONS% %REMOTE_DEBUG_JAVA_OPTIONS%
After these changes, save the startup script, start the managed server, and be sure that you have access to the admin console (for example http://localhost:7001/console). Finally, we need to check that HTTP tunneling is enabled on the managed server. To do this, log in to the admin console, select the managed server, and select the Protocols tab. Be sure that Enable Tunneling is selected.
Configuration of the portlet
First let's create a new run configuration specifically for remote debugging. Double-click the project where your portlets are developed. In the Project Properties, select the Run/Debug/Profile page. Click New... to create a new run configuration. In the Create Run Configuration dialog, enter Remote Debugging as the name of the run configuration. Leave the Copy Settings From selection at Default and click OK to create the new run configuration. Once the Remote Debugging run configuration is created, select it in the Run Configurations list and click Edit... to bring up the Edit Run Configuration dialog. In the Launch Settings page, select the Remote Debugging checkbox to enable remote debugging for this run configuration. Finally, select the Remote page and verify that the Protocol is set to Attach to JPDA and that the port matches the port specified earlier when configuring WebLogic for remote debugging (defaults to 4000).
Actual debugging of the portlet
To start the remote debugging profile, right-click your portlet project and select Start Remote Debugger. JDeveloper then asks for the host and port; if your WebLogic server is installed locally, you can accept the default settings. Set a breakpoint in your portlet's Java code and run the portal (WebCenter) application where the portlet is used. That's all; now you are able to debug the portlet Java code.
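Tip: you can verify the server side independently of JDeveloper by attaching the JDK's command-line debugger to the same port. A quick check, assuming the defaults used above (localhost, port 4000):

jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=4000

If jdb attaches successfully, the WebLogic configuration is correct and any remaining problem lies in the JDeveloper run configuration.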
Hope you will find all errors in your portlet :-)
References
http://www.oracle.com/technology/products/jdev/howtos/weblogic/remotedebugwls.html

    Read the article
